On 10/11/2011 02:04 AM, Nicolas Sebrecht wrote:
Yes, you goofed by directly editing /etc/libvirt. By doing that,
you are going behind libvirt's back - if your edits happen to work,
then a libvirtd restart will use them, but if you introduce a typo
or other problem, then it is your fault that lib…
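As a sketch of the supported workflow (the domain name bwimail02 is taken from the thread; the commands are shown as comments and only the final echo executes, so the snippet runs anywhere):

```shell
# Export, edit, and re-define under libvirt's control -- define validates the XML:
#   virsh dumpxml bwimail02 > /tmp/bwimail02.xml
#   $EDITOR /tmp/bwimail02.xml
#   virsh define /tmp/bwimail02.xml
# or do it in one step, with validation when the editor exits:
#   virsh edit bwimail02
echo "edit domains via virsh edit/define, not files under /etc/libvirt/qemu"
```

Either way libvirt rejects malformed XML up front instead of silently loading it on the next daemon restart.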
Hi,
>> I made this change by editing the xml, restarting libvirtd, then using
>> virsh to define the xml file and received this message:
>>
>> virsh # define /etc/libvirt/qemu/bwimail02.xml
>> error: Failed to define domain from /etc/libvirt/qemu/bwimail02.xml
>> error: missing source information
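For reference, the "missing source information" error generally means a &lt;disk&gt; element lacks its &lt;source&gt; child. A minimal sketch of a well-formed file-backed disk stanza (the path and target name are illustrative, not from the thread):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- the element virsh reported as missing -->
  <source file='/var/lib/libvirt/images/bwimail02.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```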
On 10/10/11, Alex wrote:
> I thought RAID10 still involved RAID1 on all disks, so really the only
> improvement would be the lack of the parity write, correct? The
> wikipedia entry seems to indicate it's not all that much faster:
Parity write and parity calculations. For a RAID 1 or RAID 10 setu…
On Sun, Oct 09, 2011 at 03:26:52PM -0400, Alex wrote:
>
> >> [quoted <disk> XML eaten by the archive; only function='0x0'/> survived]
> >
> > I meant raw disk devices rather than files e.g.
> >
> > [example <disk> XML eaten by the archive]
> >
> > This eliminates one layer of filesystem overheads.
>
> Can …
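Since the archive ate the quoted example, here is a hedged reconstruction of what a raw block-device disk stanza typically looks like (the volume path and target are assumptions, not the poster's actual XML):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- a whole LV or partition handed to the guest, bypassing the host filesystem -->
  <source dev='/dev/vg0/bwimail02'/>
  <target dev='vda' bus='virtio'/>
</disk>
```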
On 10/7/11, Alex wrote:
> Can I also ask how you measured performance and the effect any changes
> you made may have had on the system?
>
> iotop seems very general. Perhaps sar? Ideas for graphing its output?
Well, for that situation people were screaming at me, so I didn't
really measure things…
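One low-effort answer to the graphing question: record with sar or iostat, then summarize with awk before plotting. A sketch under assumed field layout (the sample lines below are made up, and real `sar -d` output has more columns):

```shell
# Average a hypothetical await column from saved sar/iostat device reports.
# Fields: device, tps, await(ms) -- layout is an assumption for illustration.
printf '%s\n' 'dev8-0 120 8.1' 'dev8-0 95 12.3' 'dev8-0 130 6.6' |
awk '{ sum += $3; n++ } END { printf "avg await: %.1f ms\n", sum / n }'
```

The same pipeline, with `print $3` instead of the average, produces a column that gnuplot or a spreadsheet can graph directly.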
On 10/7/11, Alex wrote:
> The only thing I haven't done from above is to use ionice on mail
> processes. I'm using RAID5 across three 1TB SATA3 disks,
RAID 5 is another major bottleneck there. The commonly cited write
penalty for RAID 5 appears to be 4 IOPS, while RAID 1/10 is 2 IOPS.
Due to the typical loa…
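The penalty figures translate into back-of-envelope capacity like this (75 IOPS per 7200 rpm SATA disk is an assumed round number, not a measurement from this thread):

```shell
per_disk_iops=75   # assumed per-spindle random IOPS
disks=3
# RAID 5: each front-end write costs 4 back-end I/Os
# (read data, read parity, write data, write parity).
raid5=$(( per_disk_iops * disks / 4 ))
# RAID 1/10: each front-end write costs 2 back-end I/Os (two mirrored writes).
raid10=$(( per_disk_iops * disks / 2 ))
echo "RAID5 ~${raid5} random writes/s, RAID10 ~${raid10} random writes/s"
```

On three spindles that is roughly 56 versus 112 sustained random writes per second, which is why a busy mail queue notices the difference.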
Hi,
>> iotop may report as much as 2M/s on the host, with an average of about
>> 400-600K/s. Does that seem like a lot? I can write like 80MB/s at
>> least using dd to test.
>
> I had similar problems previously. The crux in my case is the number
> of IOPS possible. 100K of 2KB file writes is still…
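The small-file point is worth making concrete: if each ~2KB file costs at least one disk I/O, the array's IOPS, not its MB/s, is the ceiling (both numbers below are illustrative):

```shell
files=100000   # 100K small-file writes
iops=150       # assumed aggregate random-write IOPS of the array
echo "100K small files need at least ~$(( files / iops ))s of pure I/O time"
```

That is why an 80MB/s sequential dd result says little about mail-queue performance.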
On 10/6/11, Alex wrote:
> This mail server does manage a lot of mail per day, but not enough to
> even consume the 8GB I've allocated, and the "mailq" command typically
> takes a few seconds to respond, even when there's only a few messages
> in the queue.
>
> iotop may report as much as 2M/s on the host, with an average of about
> 400-600K/s. Does that seem like a lot? I can write like 80MB/s at
> least using dd to test.
On Wed, Oct 05, 2011 at 02:28:30PM -0400, Alex wrote:
>
> Load on the server is regularly above 20, yet the processors
> generally are idle and the host is still responsive.
That's completely normal for an email server running spamassassin,
in my experience, and has nothing to do with libvirt. IME, the
issue is DNS lookups, which spamass…
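The "load above 20 but CPUs idle" symptom fits this: Linux load average counts tasks that are runnable or in uninterruptible sleep (state D, usually disk-I/O wait), so an I/O-bound mail stack inflates load without using CPU. A sketch with a made-up process snapshot in place of `ps -eo state=,comm=`:

```shell
# Count tasks in uninterruptible (I/O-wait) sleep; the four sample lines
# are hypothetical, standing in for live `ps -eo state=,comm=` output.
printf '%s\n' 'D clamd' 'D spamd' 'S sshd' 'R ps' |
awk '$1 == "D" { n++ } END { print n " tasks in uninterruptible sleep" }'
```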
Hi,
I have a fedora15 x86_64 host with one fedora15 guest running
amavis+spamassassin+postfix and performance is horrible. The host is a
quad-core Xeon E3-1240 with 16GB and three 1TB Seagate ST31000524NS and all
partitions are ext4. I've allocated 4 processors and 8GB of RAM to
this guest. I really hoped s…