Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> Run the database on the machine that stores the files and perform
> database access remotely over the net instead. ?

Yes, but this doesn't resolve the performance issue with NFS.



Aw: Re: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> Can you partition the files into 2 different shares?  Put the database
> files in one share and access them using "sync", and put the rest of the
> files in a different share, with no "sync"?
This could be a solution, but I want to understand why it is so slow and fix that.
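For reference, a split like that would look roughly like this in the client's /etc/fstab (server name and paths are made up):

server:/export/databases  /mnt/databases  nfs  rw,vers=4.2,sync,hard  0  0
server:/export/files      /mnt/files      nfs  rw,vers=4.2,hard       0  0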



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> You could try removing the "sync" option, just as an experiment, to see
> how much it is contributing to the slowdown.

If I don't use sync I get around 300 MB/s (tested with a 600 MB file). That's OK (far from great), but since there are database files on the NFS share this could cause file/database corruption, so we would like to keep the sync option.
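For anyone who wants to reproduce the comparison, something along these lines should work (server path and mount points are placeholders, not my real setup):

mount -t nfs -o vers=4.2,sync srv-storage:/zfs_storage /mnt/test-sync
mount -t nfs -o vers=4.2      srv-storage:/zfs_storage /mnt/test-async
dd if=/dev/zero of=/mnt/test-sync/testfile  bs=1M count=600 oflag=direct
dd if=/dev/zero of=/mnt/test-async/testfile bs=1M count=600 oflag=direct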

best regards
Stefan



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> You could test with noatime if you don't need access times.
> And perhaps with lazytime instead of relatime.
Mount options are:
type zfs (rw,xattr,noacl)
I get your point, but when you look at my fio output, the performance is quite good.
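(If access times turn out to be unnecessary, that would be a per-dataset ZFS property rather than an fstab option; the dataset name below is only an example:)

zfs get atime,relatime zfs_storage/test
zfs set atime=off zfs_storage/test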

> Could you provide us
> nfsstat -v
server:
nfsstat -v
Server packet stats:
packets    udp        tcp        tcpconn
509979521  0          510004972  2

Server rpc stats:
calls      badcalls   badfmt     badauth    badclnt
509971853  0          0          0          0

Server reply cache:
hits       misses     nocache
0          0          509980028

Server io stats:
read         write
1587531840   3079615002

Server read ahead cache:
size   0-10%  10-20% 20-30% 30-40% 40-50% 50-60% 60-70% 70-80% 80-90% 90-100% notfound
0      0      0      0      0      0      0      0      0      0      0       0

Server file handle cache:
lookup  anon  ncachedir  ncachenondir  stale
0       0     0          0             0

Server nfs v4:
null           compound
2           0% 509976662  99%

Server nfs v4 operations:
op0-unused     op1-unused     op2-future     access         close
0           0% 0           0% 0           0% 5015903     0% 3091693     0%
commit         create         delegpurge     delegreturn    getattr
314634      0% 149836      0% 0           0% 1615740     0% 390748077  20%
getfh          link           lock           lockt          locku
2573550     0% 0           0% 17          0% 0           0% 15          0%
lookup         lookup_root    nverify        open           openattr
3931149     0% 0           0% 0           0% 3131045     0% 0           0%
open_conf      open_dgrd      putfh          putpubfh       putrootfh
0           0% 3           0% 510522216  26% 0           0% 4           0%
read           readdir        readlink       remove         rename
59976532    3% 421791      0% 0           0% 429965      0% 244564      0%
renew          restorefh      savefh         secinfo        setattr
0           0% 0           0% 542231      0% 0           0% 845324      0%
setcltid       setcltidconf   verify         write          rellockowner
0           0% 0           0% 0           0% 404569758  21% 0           0%
bc_ctl         bind_conn      exchange_id    create_ses     destroy_ses
0           0% 0           0% 4           0% 2           0% 1           0%
free_stateid   getdirdeleg    getdevinfo     getdevlist     layoutcommit
15          0% 0           0% 0           0% 0           0% 0           0%
layoutget      layoutreturn   secinfononam   sequence       set_ssv
0           0% 0           0% 2           0% 509980018  26% 0           0%
test_stateid   want_deleg     destroy_clid   reclaim_comp   allocate
10          0% 0           0% 1           0% 2           0% 164         0%
copy           copy_notify    deallocate     ioadvise       layouterror
297667      0% 0           0% 0           0% 0           0% 0           0%
layoutstats    offloadcancel  offloadstatus  readplus       seek
0           0% 0           0% 0           0% 0           0% 0           0%
write_same
0           0%


client:
nfsstat -v
Client packet stats:
packets    udp        tcp        tcpconn
0          0          0          0

Client rpc stats:
calls      retrans    authrefrsh
37415730   0          37425651

Client nfs v4:
null           read           write          commit         open
1           0% 4107833    10% 30388717   81% 2516        0% 55493       0%
open_conf      open_noat      open_dgrd      close          setattr
0           0% 194252      0% 0           0% 247380      0% 75890       0%
fsinfo         renew          setclntid      confirm        lock
459         0% 0           0% 0           0% 0           0% 4           0%
lockt          locku          access         getattr        lookup
0           0% 2           0% 131533      0% 1497029     4% 318056      0%
lookup_root    remove         rename         link           symlink
1           0% 31656       0% 15877       0% 0           0% 0           0%
create         pathconf       statfs         readlink       readdir
7019        0% 458         0% 170522      0% 0           0% 30007       0%
server_caps    delegreturn    getacl         setacl         fs_locations
917         0% 118109      0% 0           0% 0           0% 0           0%
rel_lkowner    secinfo        fsid_present   exchange_id    create_session
0           0% 0           0% 0           0% 2           0% 1           0%
destroy_session  sequence     get_lease_time reclaim_comp   layoutget
0           0% 0           0% 0

Aw: Re: very poor nfs performance

2024-03-07 Thread Stefan K
Hi Ralph,

I just tested it with scp and I got 262 MB/s.
So it's not a network issue, just an NFS issue, somehow.
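For completeness, a pure network test as Ralph suggested would be something like this (assuming iperf3 is installed on both ends; the hostname is a placeholder):

iperf3 -s                          # on the server
iperf3 -c srv-storage -P 4 -t 30   # on the client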

best regards
Stefan

> Gesendet: Donnerstag, 07. März 2024 um 11:22 Uhr
> Von: "Ralph Aichinger" 
> An: debian-user@lists.debian.org
> Betreff: Re: very poor nfs performance
>
> On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> > Hello guys,
> > 
> > I hope someone can help me with my problem.
> > Our NFS performance ist very bad, like ~20MB/s, mountoption looks
> > like that:
> 
> Are both sides agreeing on MTU (using Jumbo frames or not)?
> 
> Have you tested the network with iperf (or similar), does this happen
> only with NFS or also with other network traffic?
> 
> /ralph
> 
>



very poor nfs performance

2024-03-07 Thread Stefan K
Hello guys,

I hope someone can help me with my problem.
Our NFS performance is very bad, like ~20 MB/s; the mount options look like this:
rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none
The NFS server (Debian 12) is a ZFS file server with a 40G network connection, and the local read/write performance is good:
fio --rw=readwrite  --rwmixread=70 --name=test --size=50G --direct=1 
--bsrange=4k-1M --numjobs=30 --group_reporting 
--filename=/zfs_storage/test/asdfg --runtime=600 --time_based
   READ: bw=11.1GiB/s (11.9GB/s), 11.1GiB/s-11.1GiB/s (11.9GB/s-11.9GB/s), 
io=6665GiB (7156GB), run=64-64msec
  WRITE: bw=4875MiB/s (5112MB/s), 4875MiB/s-4875MiB/s (5112MB/s-5112MB/s), 
io=2856GiB (3067GB), run=64-64msec

Only one NFS client (Debian 12) is connected, via 10G. Since we also host database files on the NFS share, the 'sync' mount option is important (more or less), but it should still be much faster than 20 MB/s.
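For comparison, the same fio workload can also be pointed at the NFS mount on the client side; the mount point /mnt/nfs and the smaller size are placeholders, the other flags are the ones from the run above:

fio --rw=readwrite --rwmixread=70 --name=nfs-test --size=5G --direct=1 \
    --bsrange=4k-1M --numjobs=4 --group_reporting \
    --filename=/mnt/nfs/fio-testfile --runtime=120 --time_based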

So, can somebody tell me what's wrong or what I should change to speed that up?

thanks in advance.
best regards
Stefan



high delay while printing

2023-11-24 Thread Stefan K
Hello Debian guys,

I hope someone can help me with my problem, because I'm a little bit frustrated and I have absolutely no clue how to fix it.

I have a Debian 12 based print server running. All printers are connected via IPP, which works so far, but it takes a while when we print multiple pages: for each page it takes >15 s until the next page comes out. It doesn't matter if we print PDFs or simple text files.

For me, it looks like CUPS is waiting until the printer says "I'm done". According to the documentation I tried ?waitjob=false&waitprinter=false at the end of the DeviceURI and restarted CUPS, but it's still the same. This happens on different types of printers, e.g. HP PageWide Pro 477dw MFP or Kyocera TASKalfa 4052ci.
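For reference, the backend options can be set with lpadmin, roughly like this (printer name and URI are placeholders):

lpadmin -p myprinter -v 'ipp://printer.example.com/ipp/print?waitjob=false&waitprinter=false'
cupsctl --debug-logging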

I can attach the error_log with debug information.

Ideas or solutions are very welcome ;-)

thanks in advance

best regards
Stefan



Re: systemd 248 in bullseye?

2021-03-05 Thread Stefan K
That is sad.
Any other ideas how I can get that done? (But I will create a new thread for that.)



On Friday, March 5, 2021 10:55:36 AM CET didier gaumet wrote:
> Hello
>
> from what I understand, Bulleye's version of Systemd is now frozen to 247:
>   https://lists.debian.org/debian-devel-announce/2021/02/msg2.html
>
>





systemd 248 in bullseye?

2021-03-05 Thread Stefan K
Hello,

is there a chance that systemd 248 gets into bullseye?
Systemd 248 allows unlocking encrypted volumes via FIDO2, which is very useful nowadays.
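For context, in systemd 248 this works roughly like the following (a sketch; the device and the crypttab entry are placeholders):

systemd-cryptenroll --fido2-device=auto /dev/sda2
# and in /etc/crypttab:
# cryptroot  UUID=...  none  fido2-device=auto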

best regards
Stefan




Re: looking for a nftables gui

2020-03-03 Thread Stefan K
Hi,

and thanks for this hint, I will have a look into it. At first glance it uses an XML config syntax, right? That's not my favorite, but OK, I will try it.

Just to be more specific:
I will build a firewall (bare metal). Behind the firewall I have 512 public IP addresses and I will manage the access rules; my boss and I favour a simple open-source solution with just IP/port access rules.
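Just for reference, the kind of ruleset such a frontend would end up generating is fairly small (addresses and ports are placeholders):

#!/usr/sbin/nft -f
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    ip daddr 203.0.113.10 tcp dport { 80, 443 } accept
  }
}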


On Thursday, February 27, 2020 2:19:55 AM CET tv.deb...@googlemail.com wrote:
> On 26/02/2020 17:54, Stefan K wrote:
> > Hello,
> >
> > we're looking for a nftables gui/frontend.
> > We want to create a simple firewall (port/ip blocking) I took a look at 
> > vuurmuur[1], but it just support iptables. Does exist some other solutions?
> >
> > We don't want to config it via cli or config-files.
> >
> > Thanks for help!
> > best regards
> > Stefan
> >
> >
> > [1] https://www.vuurmuur.org/t
> >
>
> Hello, I believe "firewalld" fits your needs, it as a frontend available
> in the package "firewall-config" and a taskbar notification/status with
> "firewall-applet" that works in various desktop environments.
> The docs can walk you or your users though the basics and more [1].
>
> "gufw" + "ufw" while not designed for nftables also work with it thanks
> to iptables compatibility wrappers. The occasional bug was discussed on
> this list not long ago.
>
> Both have the advantage of being packaged in Debian.
>
>
> [1] https://firewalld.org/documentation/howto/
>
>





looking for a nftables gui

2020-02-26 Thread Stefan K
Hello,

we're looking for a nftables gui/frontend.
We want to create a simple firewall (port/IP blocking). I took a look at vuurmuur[1], but it just supports iptables. Do other solutions exist?

We don't want to configure it via CLI or config files.

Thanks for help!
best regards
Stefan


[1] https://www.vuurmuur.org/t



Re: Fwd: alternative Firmware for BMC's

2019-07-02 Thread Stefan K
Hello,

a BMC or IPMI interface is a controller which is described here [1]

> But apparently you can replace BMC's 'firmware' with your own, something
> that OpenBMC tries to achieve. Main problem is - there are many
> BMC/ILOM/ILO, and OpenBMC targets a few of them. I'd give my hand for a
> sensible replacement of Sun's ILOM 'firmware', but I'm not aware of any.
Yes, but in this post I'm talking about Supermicro servers/mainboards, and I'm trying to find something for Aspeed AST2500 chips, but there is nothing :-(

But thanks, it looks like the author made a mistake, maybe...


[1] 
https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface#IPMI_components



On Monday, July 1, 2019 3:48:50 PM CEST rhkra...@gmail.com wrote:
> I don't know how many people are aware of what the BMC is / does -- after a 
> little googling, my tentative understanding is that this is the "extra" 
> miroprocessor that is now included in some (Intel and maybe AMD?) CPUs?
> 
> What are you trying to do with it?
> 
> If you're setting up a computer that happens to have one of those, can you 
> just ignore it, or must it do something to allow the main CPU to run?
> 
> On Monday, July 01, 2019 04:27:48 AM Stefan K wrote:
> > Hello everybody,
> > 
> > So I don't get any answers from the debian-user-german mailinglist I ask
> > you ;)
> > 
> > short question: when I read [1] (sorry just in german), it says that
> > several vendors install their own BMC software up there. So I ask myself
> > which one? OpenBMC doesn't really seem to work yet and I can't think of
> > anything else. I've already written to the author (several times) but
> > without reaction. :( Do you have any idea what alternatives there are
> > (ideally which ones work)?
> > 
> > [1]
> > https://www.golem.de/news/pantsdown-fehler-in-server-firmware-ermoeglicht-
> > rootkit-1901-138956.html
> > 
> > best regards
> > Stefan
> > 
> > --  Forwarded Message  --
> > 
> > Subject: alternative Firmware für BMC's
> > Date: Tuesday, June 25, 2019, 3:21:03 PM CEST
> > From: Stefan K 
> > To: debian-user-ger...@lists.debian.org
> > 
> > Hello everyone,
> > 
> > quick question: when I read [1], it says there that various vendors slap
> > their own BMC software on there. That makes me wonder: which one? OpenBMC
> > doesn't really seem to work yet, and otherwise nothing comes to mind. I've
> > also already written to the author (several times), but without any
> > reaction. :( Do you have any idea what alternatives there are (ideally
> > ones that actually work)?
> > 
> > [1]
> > https://www.golem.de/news/pantsdown-fehler-in-server-firmware-ermoeglicht-
> > rootkit-1901-138956.html
> > 
> > Regards
> > Stefan
> > 
> > 
> > -
> 





Fwd: alternative Firmware for BMC's

2019-07-01 Thread Stefan K
Hello everybody,

Since I didn't get any answers from the debian-user-german mailing list, I'm asking you ;)

short question: when I read [1] (sorry, it's only in German), it says that several vendors install their own BMC software on there. So I ask myself: which one? OpenBMC doesn't really seem to work yet and I can't think of anything else. I've already written to the author (several times), but without any reaction. :(
Do you have any idea what alternatives there are (ideally ones that actually work)?

[1] 
https://www.golem.de/news/pantsdown-fehler-in-server-firmware-ermoeglicht-rootkit-1901-138956.html

best regards
Stefan

--  Forwarded Message  --

Subject: alternative Firmware für BMC's
Date: Tuesday, June 25, 2019, 3:21:03 PM CEST
From: Stefan K 
To: debian-user-ger...@lists.debian.org

Hello everyone,

quick question: when I read [1], it says there that various vendors slap their own BMC software on there. That makes me wonder: which one? OpenBMC doesn't really seem to work yet, and otherwise nothing comes to mind. I've also already written to the author (several times), but without any reaction. :(
Do you have any idea what alternatives there are (ideally ones that actually work)?

[1] 
https://www.golem.de/news/pantsdown-fehler-in-server-firmware-ermoeglicht-rootkit-1901-138956.html

Regards
Stefan


-




Re: sporadic successfull mount of nfs with Stretch

2019-05-28 Thread Stefan K
I solved this issue by installing the latest backports kernel.
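For reference, that boils down to roughly the following (assuming stretch-backports is already enabled in sources.list):

apt -t stretch-backports install linux-image-amd64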



On Sunday, May 26, 2019 7:57:50 PM CEST deloptes wrote:
> Stefan K wrote:
>
> > I also try to use _netdev as mountoptions, but it didn't work.
> > Has anyone an idea how to solve this?
>
> I use defaults,retry=5,_netdev - never had an issue with this. I must admit
> I still use init and not systemd. But when testing with systemd I do not
> recall having problem. I think I switched back to init after the mounts
> were setup. If using systemd, check your dependencies.
>
>
> regards
>
>





sporadic successfull mount of nfs with Stretch

2019-05-22 Thread Stefan K
Hello,

we have some problems with Debian Stretch machines that try to mount NFSv4 shares at boot time: sometimes it works and sometimes not. If it doesn't mount during startup, I can mount it after I log in without problems.

A successful (re)boot looks like [1] and a (re)boot which doesn't mount the shares looks like [2].

For me it looks like a race condition: it tries to mount the shares before the network stack is up. The /etc/fstab looks like:
srv-storage:/scratch  /scratchnfs defaults0 2
srv-storage:/home /data/home  nfs defaults0 2

I also tried to use _netdev as a mount option, but it didn't work.
Does anyone have an idea how to solve this?
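One variant I have seen suggested (just a sketch, untested here) is to let systemd order and trigger the mounts via automount options in /etc/fstab:

srv-storage:/scratch  /scratch    nfs  defaults,_netdev,noauto,x-systemd.automount  0  2
srv-storage:/home     /data/home  nfs  defaults,_netdev,noauto,x-systemd.automount  0  2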

best regards
Stefan

[1] https://debianforum.de/forum/pastebin/?mode=view=40729
[2] https://debianforum.de/forum/pastebin/?mode=view=40728



Re: ntp-client does not sync with server

2019-02-28 Thread Stefan K
Hi John,

yes, they are synced; if I run 'ntpdate timeserv.domain.ag' everything syncs fine. If I start the ntp server, after 2-3 days I have a delay of a few seconds.
Maybe I should ask on the ntp mailing list?!

best regards
Stefan


On Friday, March 1, 2019 7:01:32 AM CET john doe wrote:
> On 2/28/2019 9:49 AM, Stefan K wrote:
> > Hallo,
> >
> > we have our own ntp-server which is running Ubuntu 14.04.LTS.
> > This Server works fine:
> > ntpq -pn
> >  remote   refid  st t when poll reach   delay   offset  
> > jitter
> > ==
> > *178.63.9.110129.69.1.153 2 u   56  128  377   22.6580.249   
> > 0.053
> > -85.214.38.116   192.53.103.108   2 u   54  128  3772.101   -5.105   
> > 0.051
> > +37.58.57.238192.53.103.103   2 u   50  128  377   20.9710.839   
> > 0.114
> > +94.130.76.108   192.53.103.108   2 u   56  128  377   24.007   -1.009   
> > 0.163
> >
> >
> > But on our Debian Strech server they all got Stratum16 and doesn't sync:
> > ntpq -p
> >  remote   refid  st t when poll reach   delay   offset  
> > jitter
> > ==
> >  timeserv.domain.ag .INIT.  16 u- 102400.0000.000   
> > 0.000
> >
> 
> Is the date somewhat in synk with the server?
> 
> http://lists.ntp.org/listinfo
> 
> --
> John Doe
> 
> 



ntp-client does not sync with server

2019-02-28 Thread Stefan K
Hello,

we have our own ntp server, which is running Ubuntu 14.04 LTS.
This server works fine:
ntpq -pn
 remote   refid  st t when poll reach   delay   offset  jitter
==
*178.63.9.110129.69.1.153 2 u   56  128  377   22.6580.249   0.053
-85.214.38.116   192.53.103.108   2 u   54  128  3772.101   -5.105   0.051
+37.58.57.238192.53.103.103   2 u   50  128  377   20.9710.839   0.114
+94.130.76.108   192.53.103.108   2 u   56  128  377   24.007   -1.009   0.163


But on our Debian Stretch servers they all get Stratum 16 and don't sync:
ntpq -p
 remote   refid  st t when poll reach   delay   offset  jitter
==
 timeserv.domain.ag .INIT.  16 u- 102400.0000.000   
0.000


On Debian Jessie (with the same configuration/network) it works fine and gets Stratum 3; an 'ntpdate timeserv.domain.ag' also works on Debian Stretch.

The firewall is not a problem, because nmap with the ntp-info script works fine:
nmap -sU -p 123 --script ntp-info timeserv.domain.ag

Starting Nmap 7.40 ( https://nmap.org ) at 2019-02-26 09:25 CET
Nmap scan report for timeserv.domain.ag (194.94.xxx.xxx)
Host is up (0.00017s latency).
PORTSTATE SERVICE
123/udp open  ntp
| ntp-info: 
|   receive time stamp: 2019-02-26T08:26:05
|   version: ntpd 4.2.6p5@1.2349-o Fri Jul  6 20:19:54 UTC 2018 (1)
|   processor: x86_64
|   system: Linux/3.13.0-161-generic
|   leap: 0
|   stratum: 3
|   precision: -20
|   rootdelay: 31.589
|   rootdisp: 42.723
|   refid: 178.63.9.110
|   reftime: 0xe01f73ce.3608c5f7
|   clock: 0xe01f768f.e9bd2105
|   peer: 57654
|   tc: 10
|   mintc: 3
|   offset: 0.008
|   frequency: 27.593
|   sys_jitter: 1.020
|   clk_jitter: 0.180
|_  clk_wander: 0.014\x0D
Service Info: OS: Linux/3.13.0-161-generic

Nmap done: 1 IP address (1 host up) scanned in 13.75 seconds

and it also looks like the client tries to query the ntp server, because I can find the IP address of the server under the section 'private clients':
nmap -sU -pU:123 -Pn -n --script=ntp-monlist timeserv.domain.ag

Can somebody help me with this issue?

Thanks in advance!

best regards
Stefan



Re: do you find old firefox is better than new one?

2018-12-17 Thread Stefan K
Hi Long Wind,

I use 60.4.0esr on Debian Stretch on a Lenovo T460; before that I used Google Chrome (because it can handle a lot of open tabs and uses a separate process for every tab). With 60.4.0esr everything works fine: at the moment I have more than 70 tabs open (in my "main" Firefox window) and it all just works.
I don't use Chrome anymore, because since FF Quantum Mozilla has done a great job with it. It's fast, it never crashed, and it just works. A hint: I deleted all my .mozilla files beforehand, so I started with a "fresh" Firefox profile. I guess that's a reason why I don't have problems ;)

best regards
Stefan



On Sunday, December 16, 2018 9:32:07 AM CET Long Wind wrote:


> i have 52.9.0 and 45.9.0, both for stretch
> 
> new one often becomes unresponsive, and i have to close it and restart it.
> it often happens when i first start it
> 
> maybe some function/service is blocked in China
> it seems it's doing something impossible, and takes much cpu resource
> but old firefox also face blocking
> 
> i can't describe it in more details or reproduce problem






Re: What is Firefox on Debian Stretch nearest future?

2018-08-30 Thread Stefan K
Hi,

Debian will upgrade to Firefox 60.2 ESR; it will be available on 2018-09-05.

more information about this:
https://www.debian.org/releases/stretch/amd64/release-notes/ch-information.html#browser-security
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815006 (chapter "About stable releases")


On Wednesday, August 29, 2018 7:23:56 PM CEST Juan R. de Silva wrote:
> Hi folks,
> 
> As far as I understend Mozilla is going to stop supporting Firefox ESR 59 
> this 
> August. Does it mean that Firefox Quantum is comming to Debian Stretch before 
> long? Is any information available?
> 
> Thanks
> 
> 
> 



Re: Slow XFS write

2018-08-23 Thread Stefan K
Hello,

do you really need the hardware controller? I suggest using software/MD RAID, or btrfs with RAID1 or ZFS with raidz1.
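A rough sketch of the suggested software alternatives (device names are placeholders):

# Linux software RAID5 plus XFS:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.xfs /dev/md0

# or ZFS raidz1:
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd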



On Wednesday, August 22, 2018 11:41:08 AM CEST Martin LEUSCH wrote:
> Hi,
> 
> 
> I have a NFS server with a hardware RAID5 on 3 HDD of 6 TB. I have a 
> system partition with ext4 and a data partition with XFS.
> 
> I get only 10 MB/s in write speed on the XFS data partition and 80 MB/s 
> on the system partition.
> 
> 
> XFS mount option:
> 
> /dev/sda4 on /var/srv/nfs type xfs (rw,relatime,attr2,inode64,noquota)
> 
> Is there a way to have better performance? Is XFS a good choice in this 
> situation?
> 
> 
> 



Aw: Re: Re: does btrfs have a future? (was: feature)

2018-08-20 Thread Stefan K
> Your English is fine.
thanks for that. :)

> Ignoring Raid5/6 and similar, I don't know what features btrfs is
> lacking that make ZFS more attractive. 
I guess it is the very poor performance, so you can't use it as a backend file server, mail server etc., and you also can't use SSDs for caching like ZFS does.
The only thing you can really use btrfs for is / (root), nothing more, because there you don't need much performance. If they fixed the performance (up to ext4 level), then btrfs would have a chance.
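For comparison, attaching SSDs to a ZFS pool for caching is one command per device (pool and device names are placeholders):

zpool add tank cache /dev/nvme0n1   # L2ARC read cache
zpool add tank log /dev/nvme1n1     # SLOG for sync writes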

> Btrfs's killer feature, imo, is its Copy-On-Write features
Yep, you're right, especially the snapshot thing; you can use it before you make upgrades (I think the new Linux Mint version does that).


best regards
Stefan

> Gesendet: Mittwoch, 15. August 2018 um 16:50 Uhr
> Von: "Matthew Crews" 
> An: "Stefan K" , debian-user@lists.debian.org
> Betreff: Re: Aw: Re: does btrfs have a future? (was: feature)
>
> On 8/15/18 2:25 AM, Stefan K wrote:
> > Did you think that "only" the RAID5/6 problem is the reason why btrfs is 
> > not so common? what is with the performance? and some (important) featrures 
> > (not futures ;) ) are missing to catch up ZFS.
> > 
> > best regards
> > Stefan
> > (sorry for my bad english)
> > 
> 
> Your English is fine. Not perfect (no one ever is), but I know plenty of
> native speakers who speak it worse than you.
> 
> In my opinion btrfs has a bad rap partially because of the RAID5/6
> situation, but also because for a long time it was marked as
> experimental, and there are some situations where data loss has occured
> (I'm guessing because of RAID5/6). But as long as you avoid RAID5/6 and
> stick to RAID1/10, you should be fine.
> 
> Ignoring Raid5/6 and similar, I don't know what features btrfs is
> lacking that make ZFS more attractive. Btrfs does have *nice* features
> that ZFS currently lacks, like adding and removing disks to the array
> on-the-fly and intelligent data balancing while the array is mounted.
> 
> Btrfs's killer feature, imo, is its Copy-On-Write features, which you
> can read about on the Arch Wiki:
> 
> https://wiki.archlinux.org/index.php/Btrfs#Copy-on-Write_.28CoW.29
> 
> Btrfs also corrects read errors on-the-fly, something ZFS doesn't do,
> but only if you are using a RAID with some level of redundancy.
> 
> 
> 



Aw: Re: does btrfs have a future? (was: feature)

2018-08-15 Thread Stefan K
Do you think that "only" the RAID5/6 problem is the reason why btrfs is not so common? What about the performance? And some (important) features (not futures ;) ) are missing to catch up with ZFS.

best regards
Stefan
(sorry for my bad english)


> Gesendet: Dienstag, 14. August 2018 um 21:09 Uhr
> Von: "Matthew Crews" 
> An: "Anders Andersson" 
> Cc: "Debian users mailing list" 
> Betreff: Re: does btrfs have a future? (was: feature)
>
> ‐‐‐ Original Message ‐‐‐
> On August 14, 2018 6:54 AM, Anders Andersson  wrote:
> 
> > On Tue, Aug 14, 2018 at 12:26 PM, Stefan K shado...@gmx.net wrote:
> > Before people start discussingfeatures, note that OP uses the
> > mostly non-standard spelling "feature" when he means "future".
> 
> Good catch, I thought the subject was strange.
> 
> I think btrfs does have a future once they work out the Raid5/6 write hole, 
> and patch in a few quality of life changes.
> 
> On the other hand, if ZFS is ever relicensed to be GPL-compatible (and 
> therefore includable in the Linux kernel directly), then I think that will 
> kill btrfs outright.
> 
>



Aw: Re: does btrfs have a feature?

2018-08-15 Thread Stefan K
oohh sh**,
sorry for that, but I guess you know what I mean ;)

> Gesendet: Dienstag, 14. August 2018 um 15:54 Uhr
> Von: "Anders Andersson" 
> An: "Debian users mailing list" 
> Betreff: Re: does btrfs have a feature?
>
> On Tue, Aug 14, 2018 at 12:26 PM, Stefan K  wrote:
> > In the beginning of btrfs, most blogs, websites, magazins said btrfs will 
> > be THE next standard linux filesystem, so now after araound 10years it 
> > doesn't look so good, or?
> >
> > Who use btrfs in production? What do you think - does have btrfs a feature 
> > (because ZFS on Linux is more and more stable, RedHat said we don't want 
> > btrfs anymore and focus to xfs)
> >
> > I use btrfs on some new bare-metal machines for the root-disks, because it 
> > has a build-in RAID1 and snapshots, I know LVM and md-raid have also this 
> > possibilities but in btrfs it is much easier. I don't use it for data or 
> > other things(mail, database, etc), cause it is slow compared to ext4/xfs. 
> > I'm also wondering why the hell btrfs don't support ssd's for caching like 
> > zfs.
> 
> Before people start discussing *features*, note that OP uses the
> mostly non-standard spelling "feature" when he means "future".
> 
> 



does btrfs have a feature?

2018-08-14 Thread Stefan K
Hello,

I'm just curious.
In the beginning of btrfs, most blogs, websites and magazines said btrfs will be THE next standard Linux filesystem; now, after around 10 years, it doesn't look so good, or?

Who uses btrfs in production? What do you think - does btrfs have a feature (because ZFS on Linux is more and more stable, and Red Hat said they don't want btrfs anymore and focus on XFS)?

I use btrfs on some new bare-metal machines for the root disks, because it has built-in RAID1 and snapshots. I know LVM and md-raid also offer these capabilities, but with btrfs it is much easier. I don't use it for data or other things (mail, database, etc.), because it is slow compared to ext4/xfs. I'm also wondering why the hell btrfs doesn't support SSDs for caching like ZFS.
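What I mean looks roughly like this (device names and the snapshot path are just examples):

mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2
btrfs subvolume snapshot -r / /snapshots/before-upgrade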

thanks for your opinions!

best regards
Stefan 



Aw: Re: session trunking with NFS

2018-06-27 Thread Stefan K
Hi,

today I tried it, but it didn't work:
on my NFS test system I use the two 1G interfaces
showmount -e 
and
showmount -e 
show me the exports.
So now I mount the NFS share on a server with 10G interfaces (bond); when I mount it with the second NFS IP, I get the error "mount.nfs: mount(2): Device or resource busy".

Did I do something wrong?
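For reference, what I tried follows the approach from the linked article: mounting the same export once per server address onto the same mount point (addresses and paths below are placeholders):

mount -t nfs -o vers=4.1 192.0.2.10:/export /mnt/share
mount -t nfs -o vers=4.1 192.0.2.11:/export /mnt/share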

best regards
Stefan

> Gesendet: Dienstag, 26. Juni 2018 um 09:07 Uhr
> Von: Reco 
> An: debian-user@lists.debian.org
> Betreff: Re: session trunking with NFS
>
>   Hi.
> 
> On Tue, Jun 26, 2018 at 08:57:25AM +0200, Stefan Krueger wrote:
> > Hello,
> > 
> > so far as I know Debian stretch is shipped with NFS-Version 4.2. The RFC[1] 
> > said NFSv4.1 has the capability for sessiontrunking to speed up the 
> > performance/throughput, so my question is how can I archiv this? How to 
> > configure the NFS-server and how to mount it on the client-side? There is 
> > no hint in the manpage for this.
> 
> The way they describe the feature at [1], it does not seem being that useful.
> 
> Assuming that you don't need a bunch of kernel patches ([1] describes
> Debian 7.9), all you need to do is obtain an NFS server with multiple
> non-bonded network interfaces, a client with the same, and mount NFS
> share several times into the same directory.
> 
> And all you get out of this is the ability to utilize several network
> links on both NFS client and server for a single client.
> 
> Personally I'd rather use conventional network bonding on NFS server,
> and be done with it.
> 
> [1] http://packetpushers.net/multipathing-nfs4-1-kvm/
> 
> Reco
> 
> 



Mounting a USB disk

2004-08-11 Thread Stefan K.
Hi list ...

... the following problem:

I have several USB hard disks that I want to use. With the old 2.4 kernel (and murasaki, usb-storage and hotplug) no problem. With the 2.6 kernel (and the same - partly recompiled - packages) no detection; or rather, Linux does notice that something is attached to the USB port, but not what it is. Under 2.4 the disk was shown properly with vendor etc. and could be mounted with /dev/sda on /mnt/wechselplatte.
---
dmesg says:

...
scsi1 : SCSI emulation for USB Mass Storage devices
  Vendor: SAMSUNG   Model: SP1604N   Rev: TM10
  Type:   Direct-Access  ANSI SCSI revision: 02
SCSI device sda: 312581809 512-byte hdwr sectors (160042 MB)
sda: assuming drive cache: write through
 /dev/scsi/host1/bus0/target0/lun0: p1
Attached scsi disk sda at scsi1, channel 0, id 0, lun 0
USB Mass Storage device found at 3
NTFS driver 2.1.14 [Flags: R/O MODULE].
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
NTFS-fs error (device sda): read_ntfs_boot_sector(): Primary boot sector
is invalid.
NTFS-fs error (device sda): read_ntfs_boot_sector(): Mount option
errors=recover not used. Aborting without trying to recover.
NTFS-fs error (device sda): ntfs_fill_super(): Not an NTFS volume.
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
NTFS-fs error (device sda): read_ntfs_boot_sector(): Primary boot sector
is invalid.
NTFS-fs error (device sda): read_ntfs_boot_sector(): Mount option
errors=recover not used. Aborting without trying to recover.
NTFS-fs error (device sda): ntfs_fill_super(): Not an NTFS volume.
FAT: invalid media value (0xb9)
VFS: Can't find a valid FAT filesystem on dev sda.
usb 1-2: USB disconnect, address 3
usb 1-2: new high speed USB device using address 4
scsi2 : SCSI emulation for USB Mass Storage devices
  Vendor: SAMSUNG   Model: SP1604N   Rev: TM10
  Type:   Direct-Access  ANSI SCSI revision: 02
SCSI device sda: 312581809 512-byte hdwr sectors (160042 MB)
sda: assuming drive cache: write through
 /dev/scsi/host2/bus0/target0/lun0: p1
Attached scsi disk sda at scsi2, channel 0, id 0, lun 0
USB Mass Storage device found at 4
athene:~#
---
Where is the problem?
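(For what it's worth, the dmesg output above shows a partition (p1), so a quick check would be something like the following; the mount point is the one from the old setup:)

fdisk -l /dev/sda
mount /dev/sda1 /mnt/wechselplatte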

Regards

Stefan

