Re: [quadstor-virt] Re: FC access, ispmod driver problem?

2016-06-30 Thread QUADStor Support
Write Size: 580.25 GB
Write Ops: 353019

The above are cumulative statistics, not the actual space used by the
VDisks.

3  HP RAID 5  A0500D3FP85AC0DX3RI00DC050da3  Default
610.53 GB  295.49 GB  D L  Modify  Remove

The actual space used by the vdisk would be around 280 GB (the vdisk
usage statistics shown under Physical Storage).


On Thu, Jun 30, 2016 at 4:31 PM, Dmitry Polezhaev  wrote:
>> Around 96% of a physical disk's space is usable for VDisk data. For the
>> 610 GB disk this translates to 585 GB, which is around the mark where
>> the out-of-space errors occurred
>
> Thank you, this is important data. But how might the 300 GB virtual disk,
> under heavy load, come to occupy ~585 GB without snapshotting or anything
> else that consumes extra space?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] http://www.quadstor.com/ Downloads Are Now Password Protected

2016-08-17 Thread QUADStor Support
The enterprise edition will eventually require a support subscription,
although the downloads are accessible. The open source edition, which
will always be free and maintained, is available at
http://www.quadstor.com/storage-virtualization-source-install.html

Both are compatible with each other, with no difference in features. The
only slight change as of now is that the enterprise edition does not
use log disks. Because of this, performance and scalability will be slightly
different, with no bias towards either edition. Also, the enterprise
edition works on a limited set of distributions.

The real reason why we cannot make the enterprise edition downloads
private is that we haven't been able to test the compatibility
between the two editions and sort out any issues. Until we can, things
will continue the way they are.

A similar topic was discussed in this group a couple of years ago;
things haven't changed much since then.

On Tue, Aug 16, 2016 at 6:55 AM, Jonathan LaPlaca
 wrote:
> Does anyone know why the downloads on http://www.quadstor.com/ are now
> password protected.
>
> I'm having issues with quadstor causing a crash on me and I would like to
> get a new copy.
>
> Thanks
> Jonathan
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] Kernel 4.x?

2016-08-23 Thread QUADStor Support
You can probably wait for 3-4 weeks. An update to the 3.0.82 release is
planned to bring the changes on par with the 3.2 release. Kernel 4.4
will then be supported.
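
In the meantime, a rough sketch for checking which kernel the modules
would be built against (assuming the install provides the
/quadstor/bin/builditf rebuild script mentioned elsewhere on this list;
whether a given 4.x kernel builds is exactly what the update above
addresses):

# show the running kernel the quadstor modules would be built for
uname -r
# rebuild the quadstor kernel modules against the running kernel
/quadstor/bin/builditf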

On Tue, Aug 23, 2016 at 11:02 PM, Mark Syms  wrote:
> Hi,
>
> Can anyone tell me what the latest 4.x kernel supported by the open source
> 3.0.82 release is? I see the changelog mentions build updates for 4.2.x, are
> these generally applicable for later versions, e.g. the long term support
> version 4.4 or the current stable 4.7, or are more updates required for the
> later kernel versions? I know I can just pull it down and try but I'd rather
> avoid the wasted effort if I can.
>
> Thanks,
>
> Mark.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] ssd cache + qs vdisks on linux

2016-09-19 Thread QUADStor Support
You could try btrace on the device to see the IO activity.

If you are running tests with zeroed data, then the writes will be
deduplicated, so there will be negligible data written to the SSD
disk by the vdisk.
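
As a rough sketch (assuming fio, which is what is being used later in
this thread, and treating the file names and sizes as placeholders),
filling the write buffers with non-repeating random data avoids the
dedupe effect:

# hypothetical fio run against the vdisk with non-zero, non-repeating
# buffers so that deduplication does not absorb the writes
fio --name=randwrite --filename=/dev/quadstor/<vdisk name> --rw=randwrite \
    --bs=4k --size=4g --direct=1 --refill_buffers --randrepeat=0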

On Mon, Sep 19, 2016 at 4:33 AM, Zac Durham  wrote:
> There's a few topics out there on this subject but I can't get my head
> around any of them
>
> I have 3 x 2TB in software (mdadm) RAID5 (yeah, yeah I know) and a spare
> 120G SSD to use as cache. Currently I use flashcache to back the entire md
> device but do not realize any benefit in doing so (nor do I see any activity
> on the cache device when performing I/O against the array).
>
> Where is my approach flawed? I've also tried fronting the qs vdisks
> themselves with the SSD device to no avail either.
>
> Any help would be appreciated.
>
> -Z
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] ssd cache + qs vdisks on linux

2016-09-20 Thread QUADStor Support
Please send us the diagnostics (HTML UI -> System Page -> Run
Diagnostics) to supp...@quadstor.com

Also please run btrace against the following devices while running the fio test
btrace -w 120 /dev/quadstor/<vdisk name> > /tmp/vdisk.out 2>&1 &
btrace -w 120 <flashcache device> > /tmp/cache.out 2>&1 &
btrace -w 120 /dev/<ssd device> > /tmp/ssd.out 2>&1 &
btrace -w 120 /dev/<md device> > /tmp/md.out 2>&1 &

The btrace command will terminate after two minutes.

Please note that the data written to the SSD cache need not be sent to
the md device yet. That happens only when the cache is filling
up. If that is what you are seeing, then there is no issue here.
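
As a side note (a sketch, assuming the flashcache cache is a
device-mapper target and that <cache device> is its dm name), the
cache occupancy and dirty-block counters can be inspected with:

# show the flashcache target's statistics (cached/dirty blocks, hit counters)
dmsetup status <cache device>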

On Tue, Sep 20, 2016 at 5:06 AM, Zac Durham  wrote:
> I'm running randread and randwrites with fio. I would expect something out
> of the cache device- instead nothing.
>
> I've considered using bcache, but I would have to build that for centos 7.
> Not out of the question, just not in my immediate wheelhouse.
>
> Thanks for the response, very much.
>
> Zac
>
> On Monday, September 19, 2016 at 4:41:45 PM UTC-4, quadstor wrote:
>>
>> You could try btrace on the device to see the IO activity.
>>
>> If you are running tests with zeroed data, then they will be
>> deduplicated. So there will be negiligible data written to the SSD
>> disk by the vdisk.
>>
>> On Mon, Sep 19, 2016 at 4:33 AM, Zac Durham  wrote:
>> > There's a few topics out there on this subject but I can't get my head
>> > around any of them
>> >
>> > I have 3 x 2TB in software (mdadm) RAID5 (yeah, yeah I know) and a spare
>> > 120G SSD to use as cache. Currently I use flashcache to back the entire
>> > md
>> > device but do not realize any benefit in doing so (nor do I see any
>> > activity
>> > on the cache device when performing I/O against the array).
>> >
>> > Where is my approach flawed? I've also tried fronting the qs vdisks
>> > themselves with the SSD device to no avail either.
>> >
>> > Any help would be appreciated.
>> >
>> > -Z
>> >
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] QUADStor on CentOS 7.2 with mdraid - Dependency failed for Local File System

2016-10-18 Thread QUADStor Support
We need to check the systemd unit file. It's possible that we are
trying to mount earlier than is possible.

Does the following result in an auto-mount?

umount /mnt/quadstor-vv1
service quadstor restart

On Wed, Oct 19, 2016 at 12:36 AM, Aleksey Maksimov
 wrote:
> Hi all!
>
> I have successfully installed QuadStor 3.2.9 on CentOS 7.2 and created a
> virtual disk VDisk1 on top of the array mdadm (6 raid)
>
> # lsblk /dev/quadstor/VDisk1
>
> NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> sdu   65:64   0  3.7T  0 disk
>
> Then I created ext4 disk partition.
>
> Partition is mounted without problems
>
> # mkfs.ext4 /dev/quadstor/VDisk1
> # mkdir /mnt/quadstor-vv1
> # mount /dev/quadstor/VDisk1 /mnt/quadstor-vv1
> # df -H /dev/quadstor/VDisk1
>
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sdu4.0T   93M  3.8T   1% /mnt/quadstor-vv1
>
> But this partition does not start when operating system starts
>
> ...
> Dependency failed for /mnt/quadstor-vv1
> ...
> Dependency failed for Local File System
> ...
>
> My /etc/fstab item for Vdisk:
>
> /dev/quadstor/VDisk1 /mnt/quadstor-vv1/  ext4 defaults 0 0
>
> And I also created the file /quadstor/etc/fstab.custom with the same item.
> But it does not help
>
> What could be the cause of the problem?
>
> PS: I sent several emails to the address supp...@quadstor.com but no one
> answers. Support not working??
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] QUADStor on CentOS 7.2 with mdraid - Dependency failed for Local File System

2016-10-18 Thread QUADStor Support
Also, does modifying the fstab entry to the following work?

/dev/quadstor/VDisk1 /mnt/quadstor-vv1/  ext4
default,noauto,x-systemd.automount 0 0

On Wed, Oct 19, 2016 at 3:06 AM, QUADStor Support  wrote:
> We need to check the systemd unit file. Its possible that we are
> trying to mount earlier than possible.
>
> Does the following result in an auto-mount ?
>
> umount /mnt/quadstor-vv1
> service quadstor restart
>
> On Wed, Oct 19, 2016 at 12:36 AM, Aleksey Maksimov
>  wrote:
>> Hi all!
>>
>> I have successfully installed QuadStor 3.2.9 on CentOS 7.2 and created a
>> virtual disk VDisk1 on top of the array mdadm (6 raid)
>>
>> # lsblk /dev/quadstor/VDisk1
>>
>> NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
>> sdu   65:64   0  3.7T  0 disk
>>
>> Then I created ext4 disk partition.
>>
>> Partition is mounted without problems
>>
>> # mkfs.ext4 /dev/quadstor/VDisk1
>> # mkdir /mnt/quadstor-vv1
>> # mount /dev/quadstor/VDisk1 /mnt/quadstor-vv1
>> # df -H /dev/quadstor/VDisk1
>>
>> Filesystem  Size  Used Avail Use% Mounted on
>> /dev/sdu4.0T   93M  3.8T   1% /mnt/quadstor-vv1
>>
>> But this partition does not start when operating system starts
>>
>> ...
>> Dependency failed for /mnt/quadstor-vv1
>> ...
>> Dependency failed for Local File System
>> ...
>>
>> My /etc/fstab item for Vdisk:
>>
>> /dev/quadstor/VDisk1 /mnt/quadstor-vv1/  ext4 defaults 0 0
>>
>> And I also created the file /quadstor/etc/fstab.custom with the same item.
>> But it does not help
>>
>> What could be the cause of the problem?
>>
>> PS: I sent several emails to the address supp...@quadstor.com but no one
>> answers. Support not working??
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "QUADStor Storage Virtualization" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to quadstor-virt+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] Re: QUADStor on CentOS 7.2 with mdraid - Dependency failed for Local File System

2016-10-21 Thread QUADStor Support
We used to mount the vdisk through a udev rule. That does not seem to
work for all configurations, and the shutdown/unmount of the file
system also fails, which then requires a system reboot.

3.2.10 fixes this and can be downloaded from
http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html

3.2.9 can fail to uninstall on RHEL7/Debian 8. To fix this:
chkconfig quadstor off
reboot
Then proceed with the uninstall.

The fstab entries need to have the following options
defaults,nofail,noauto

The noauto is to ensure that the filesystem isn't mounted
automatically, since that is done by the quadstor init script. (nofail is
probably irrelevant when noauto is specified)
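
For example, the complete entry for the VDisk in this thread would then read:

/dev/quadstor/VDisk1 /mnt/quadstor-vv1 ext4 defaults,nofail,noauto 0 0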


On Wed, Oct 19, 2016 at 11:27 AM, Aleksey Maksimov
 wrote:
>> Does the following result in an auto-mount ?
>> umount /mnt/quadstor-vv1
>> service quadstor restart
>
>
> # umount /mnt/quadstor-vv1
> # service quadstor restart
> Restarting quadstor (via systemctl):  [  OK  ]
>
>
> Don't quite understand what we expect from those commands.
>
>
>> /dev/quadstor/VDisk1 /mnt/quadstor-vv1/  ext4
>> default,noauto,x-systemd.automount 0 0
>
>
> You probably meant this?:
> /dev/quadstor/VDisk1 /mnt/quadstor-vv1/ ext4
> defaults,noauto,x-systemd.automount 0 0
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] Re: QUADStor on CentOS 7.2 with mdraid - Dependency failed for Local File System

2016-10-21 Thread QUADStor Support
Just /etc/fstab. /quadstor/etc/fstab.custom isn't used on Linux.

On Fri, Oct 21, 2016 at 4:57 PM, Aleksey Maksimov
 wrote:
> Shall I create a file /quadstor/etc/fstab.custom ?
> Or just edit /etc/fstab ?
>
>
> ...
> # Mount QUADStor Virtual Disk /dev/quadstor/VDisk1 on /mnt/quadstor-vv1
> #
> /dev/quadstor/VDisk1 /mnt/quadstor-vv1/ ext4 defaults,nofail,noauto 0 0
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] QUADStor 3.2.10 - Statistics about deduplication (ext4, NFSv4)

2016-10-21 Thread QUADStor Support
The file system isn't aware of the deduplication that occurs
underneath. All it sees is a physical disk. For example, you can write
zeros to a file and fill up the 3.4 TB filesystem, but the VDisk
deduplication would mean only a few MB or GB is used on the actual
physical disk.
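
A quick way to see this (a sketch; the mount point is the one from this
thread and the file name is a placeholder) is to write a large all-zero
file and compare what the filesystem reports with the VDisk usage shown
under Physical Storage:

# write 10 GB of zeros into the ext4 filesystem on the vdisk
dd if=/dev/zero of=/mnt/quadstor-vv1/zerofill.img bs=1M count=10240
# the filesystem reports ~10 GB more used space ...
df -h /mnt/quadstor-vv1
# ... while the physical usage in the QUADStor UI barely changes,
# since the zeroed blocks are deduplicated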

https://groups.google.com/forum/#!topic/quadstor-virt/QQUCkhhsAYk is a
similar topic

On Fri, Oct 21, 2016 at 10:21 PM, Aleksey Maksimov
 wrote:
> Hello QUADStor guru`s!
>
>
> Fresh installed QUADStor 3.2.10
>
> I have created and mounted the virtual disk as ext4 partition.
> Then I exported this partition using NFSv4
> Then I started from NFS-client backup of KVM (oVirt) virtual machines in
> NFS-share
>
> VDisk Statistics from the QUADStor web console:
>
> Write Size: 33.85 GB
> Write Ops: 231954
> Read Size: 23.86 MB
> Read Ops: 4797
> Unaligned Size: 0.00 KB
> Data Deduped: 27.11 GB
> Dedupe Ratio: 5.020
> Data Unmapped: 3.638 TB
> Unmap Ops: 1907200
> Blocks Zeroed: 0.00 KB
> Uncompressed Size: 6.74 GB
> Compressed Size: 0.00 KB
>
> Based on these statistics, I expect to see that the disk's used space is 6.74
> GB, but in fact here is what I see:
>
> # df -lah /mnt/quadstor-vv1
>
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sdu3.6T   32G  3.4T   1% /mnt/quadstor-vv1
>
> # du -shc /mnt/quadstor-vv1/*
>
> 16K /mnt/quadstor-vv1/lost+found
> 4.0K/mnt/quadstor-vv1/ovirt-engine-backup
> 3.8G/mnt/quadstor-vv1/ovirt-iso-domain
> 29G /mnt/quadstor-vv1/ovirt-vm-backup
> 32G total
>
> Statistics about deduplication deceptive?
> Deduplication doesn't really work or I don't understand something?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] VDisk Mirroring Causing Kernel Panic

2016-11-29 Thread QUADStor Support
Check if there is a vmcore-dmesg.txt under /var/crash. If there is
one, please send it across. You can send it to supp...@quadstor.com
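
For example (assuming the default kdump setup on RHEL/CentOS, which
writes the crash data into a per-crash subdirectory), the file can be
located with:

ls /var/crash/*/vmcore-dmesg.txt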

On Wed, Nov 30, 2016 at 1:25 AM, Randy Burkholder
 wrote:
> I'm running Quadstor Virt 3.2.10 on RHEL 7.3. When I try to mirror a vdisk
> using qmirror it is causing a kernel panic. The kernel is up-to-date
> according to yum and I have rebuilt the kernel modules using
> /quadstor/bin/builditf.
>
> I'm working on analyzing the kdump info but I was wondering if anyone else
> has experienced this issue?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] VDisk Mirroring Causing Kernel Panic

2016-12-03 Thread QUADStor Support
There was a memory double-free introduced in one of the earlier
releases when mirroring data blocks that are compressed on disk at the
source side.

This is fixed in 3.2.11, which can be downloaded from
www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html

On Wed, Nov 30, 2016 at 7:24 PM, Randy Burkholder
 wrote:
> Just sent across the diagnostic package which includes the vmcore-dmesg. I
> can also send across the crash data if needed. As an update, this is
> occurring with large vdisks. I tried a 100GB vdisk and it worked without
> issue. I resized that same vdisk to 10TB, attempted to run qmirror, and had
> the kernel panic.
>
>
> On Tuesday, November 29, 2016 at 3:21:01 PM UTC-5, quadstor wrote:
>>
>> Check if there is a vmcore-dmesg.txt under /var/crash. If there is
>> one, please send it across. You can send it to supp...@quadstor.com
>>
>> On Wed, Nov 30, 2016 at 1:25 AM, Randy Burkholder
>>  wrote:
>> > I'm running Quadstor Virt 3.2.10 on RHEL 7.3. When I try to mirror a
>> > vdisk
>> > using qmirror it is causing a kernel panic. The kernel is up-to-date
>> > according to yum and I have rebuilt the kernel modules using
>> > /quadstor/bin/builditf.
>> >
>> > I'm working on analyzing the kdump info but I was wondering if anyone
>> > else
>> > has experienced this issue?
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-virt+unsubscr...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] OMG, I kinda fixed it!

2016-12-06 Thread QUADStor Support
Between 3.2.9 and 3.2.11, nothing really changed in the IO path. Just a
few installation script changes and one memory corruption fix during
qmirror, which isn't related to the issue you face. Please send us the
diagnostics from both nodes to supp...@quadstor.com

For a full resync, the only way as of now would be to delete the VDisk
on the destination and set up mirroring again.

On Tue, Dec 6, 2016 at 1:19 PM, Paul Reid  wrote:
> As I previously posted, I had a real problem while upgrading from 3.2.9 to
> 3.2.11 with mirrored vDisks, where my most important vDisk appeared to be
> completely lost after the first node was upgraded. I wasn't in the best mood
> after that, as you can probably imagine.
>
> But, I have gotten my data back and my VMs are booting again.
>
> I appear to have caused a split brain issue with the upgrade somehow,
> resulting in both nodes being slaves after I rolled back to 3.2.9.
>
> My ESXi hosts were talking to my first node, which has corrupted data. When
> I forced my hosts to talk to my second node, I was able to boot my VMs.
>
> I then rebooted my first node, and it came up and took the master role (I
> couldn't get it to switch with qsync - it kept reporting an ioctl error).
>
> So, things look good - but I'm worried about the data my first node has,
> which is obviously garbage data now, so I shut it down so it doesn't screw
> up my good data. My second node switched to master, like it should, and is
> now running everything.
>
> Both my ESXi hosts were locked to my second node, so my first node wasn't
> getting any reads or writes from the host, so it shouldn't have messed up my
> second node's disks - I hope. It's alive right now - so far, so good. I will
> scan the disks inside the VMs to see if anything comes up looking like
> problems, just to be sure.
>
> In the meantime, I need to figure out what to do with my first node. What
> I'd like to do is blow away it's disks and make it resync from my second
> node - except I'm not sure how to do that at the moment. I am super happy my
> VMs and backups (to a second vDisk that I feared was damaged as well) aren't
> totally gone, though!
>
> So, my question now is: to reset my first node's disks back to blank and/or
> force it to resync from my second node, should I just delete it's vDisks and
> re-set them up? Or is there a command I can type to invalidate my first
> nodes disks, which will trigger a full resync from the other node?
>
> Thanks in advance!
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] OMG, I kinda fixed it!

2016-12-06 Thread QUADStor Support
On Tue, Dec 6, 2016 at 2:56 PM, Paul Reid  wrote:
> Thanks for the quick response.
>
> Yeah, I didn't think a whole lot changed, but something was unhappy during
> the upgrade. I did have problems getting 3.2.9 uninstalled (I always seem to
> have to run the uninstall command twice to get it to go - the first time
> spits out an error with every version I've upgraded so far). Then I had to
> fiddle with the DNS settings of CentOS 7 because I had changed them from my
> temporary ones to my production ones, which required a reboot to enable so I
> could download the 3.2.11 RPM. Once I installed the new RPM and the
> synchronization started, everything got very unhappy, with VMs freezing. The
> synchronization carried on, even after the VMs all froze, and seemed to take
> longer than I expected, considering the host was doing virtually nothing and
> the QuadStor node was only down long enough to reboot and install. I
> probably screwed something up that I can't think of right now - it'll come
> to me in my sleep, or something, maybe.

The uninstall failure was the one fixed in 3.2.10. The resync
shouldn't have taken much time and something over there needs fixing.

> So, to get the full resync done, I bring up my corrupted node with the
> virtual NICs disconnected, so it can't talk to my other node, delete my
> vDisks, reconnect my virtual NICs, then recreate my vDisks, and the good
> node should mirror everything to my previously problem node, and I should be
> good to go. Does that sound right?

You do not need to recreate the vdisk after deleting it. Once
mirroring is set up, the source node will create the vdisk on the
destination.

> I did scan my VMFS volumes for corruption with VOMA, and everything looks
> ok. I found issues with the boot drives of a couple of VMs, so I turfed them
> and am no restoring them from backups, just to be sure all is well.
>
> I will send the diagnostics as soon as I get my first node booted.

OK, thanks. We need to check the logs from the time the resync was running.

> On Tuesday, December 6, 2016 at 1:14:54 AM UTC-8, quadstor wrote:
>>
>> Between 3.2.9 -> 3.2.11, nothing really changed in the IO path. Just a
>> few installation script changes and one memory corruption during
>> qmirror which isn't related to the issue you face. Please send us the
>> diagnostics from both nodes to sup...@quadstor.com
>>
>> For a full resync, the only way as of now would be to delete the VDisk
>> on the destination and setup mirroring again.
>>
>> On Tue, Dec 6, 2016 at 1:19 PM, Paul Reid  wrote:
>> > As I previously posted, I had a real problem while upgrading from 3.2.9
>> > to
>> > 3.2.11 with mirrored vDisks, where my most important vDisk appeared to
>> > be
>> > completely lost after the first node was upgraded. I wasn't in the best
>> > mood
>> > after that, as you can probably imagine.
>> >
>> > But, I have gotten my data back and my VMs are booting again.
>> >
>> > I appear to have caused a split brain issue with the upgrade somehow,
>> > resulting in both nodes being slaves after I rolled back to 3.2.9.
>> >
>> > My ESXi hosts were talking to my first node, which has corrupted data.
>> > When
>> > I forced my hosts to talk to my second node, I was able to boot my VMs.
>> >
>> > I then rebooted my first node, and it came up and took the master role
>> > (I
>> > couldn't get it to switch with qsync - it kept reporting an ioctl
>> > error).
>> >
>> > So, things look good - but I'm worried about the data my first node has,
>> > which is obviously garbage data now, so I shut it down so it doesn't
>> > screw
>> > up my good data. My second node switched to master, like it should, and
>> > is
>> > now running everything.
>> >
>> > Both my ESXi hosts were locked to my second node, so my first node
>> > wasn't
>> > getting any reads or writes from the host, so it shouldn't have messed
>> > up my
>> > second node's disks - I hope. It's alive right now - so far, so good. I
>> > will
>> > scan the disks inside the VMs to see if anything comes up looking like
>> > problems, just to be sure.
>> >
>> > In the meantime, I need to figure out what to do with my first node.
>> > What
>> > I'd like to do is blow away it's disks and make it resync from my second
>> > node - except I'm not sure how to do that at the moment. I am super
>> > happy my
>> > VMs and backups (to a second vDisk that I feared was damaged as well)
>> > aren't
>> > totally gone, though!
>> >
>> > So, my question now is: to reset my first node's disks back to blank
>> > and/or
>> > force it to resync from my second node, should I just delete it's vDisks
>> > and
>> > re-set them up? Or is there a command I can type to invalidate my first
>> > nodes disks, which will trigger a full resync from the other node?
>> >
>> > Thanks in advance!
>> >
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop 

Re: [quadstor-virt] OMG, I kinda fixed it!

2016-12-06 Thread QUADStor Support
The tdatt thread seems to be waiting for the peer to resync. You can try
the following.

On the node which is causing the issue:
/sbin/chkconfig quadstor off
reboot

On the node which is fine, go to the mirroring settings for the vdisk and
delete the mirroring setup.

Then on the node which is causing the issue:
chkconfig quadstor on
service quadstor start

You should now be able to delete the vdisk.


On Wed, Dec 7, 2016 at 6:36 AM, Paul Reid  wrote:
> I seem to be having some difficulty removing the vDisk from the node with
> the corrupted disk. When I attempt to remove the vDisk, the web interface
> stops responding. I am seeing "task tdatt0:4954 blocked for more than 120
> seconds" on the QuadStor console, even before I try to remove the vDisk.
>
> Is there another way to remove the vDisk, or should I be going down the road
> of a reinstall to get where I need to go?
>
> Note that the QuadStor node is isolated from my good QuadStor node, so it
> can't try to synchronize and risk damaging my good node. Maybe that's
> related.
>
> On Tuesday, December 6, 2016 at 3:04:09 AM UTC-8, quadstor wrote:
>>
>> On Tue, Dec 6, 2016 at 2:56 PM, Paul Reid  wrote:
>> > Thanks for the quick response.
>> >
>> > Yeah, I didn't think a whole lot changed, but something was unhappy
>> > during
>> > the upgrade. I did have problems getting 3.2.9 uninstalled (I always
>> > seem to
>> > have to run the uninstall command twice to get it to go - the first time
>> > spits out an error with every version I've upgraded so far). Then I had
>> > to
>> > fiddle with the DNS settings of CentOS 7 because I had changed them from
>> > my
>> > temporary ones to my production ones, which required a reboot to enable
>> > so I
>> > could download the 3.2.11 RPM. Once I installed the new RPM and the
>> > synchronization started, everything got very unhappy, with VMs freezing.
>> > The
>> > synchronization carried on, even after the VMs all froze, and seemed to
>> > take
>> > longer than I expected, considering the host was doing virtually nothing
>> > and
>> > the QuadStor node was only down long enough to reboot and install. I
>> > probably screwed something up that I can't think of right now - it'll
>> > come
>> > to me in my sleep, or something, maybe.
>>
>> The uninstall failure was the one fixed in 3.2.10. The resync
>> shouldn't have taken much time and something over there needs fixing.
>>
>> > So, to get the full resync done, I bring up my corrupted node with the
>> > virtual NICs disconnected, so it can't talk to my other node, delete my
>> > vDisks, reconnect my virtual NICs, then recreate my vDisks, and the good
>> > node should mirror everything to my previously problem node, and I
>> > should be
>> > good to go. Does that sound right?
>>
>> You do not need to recreate the vdisk after deleting it. Once
>> mirroring is setup, the source node will create the vdisk on the
>> destination.
>>
>> > I did scan my VMFS volumes for corruption with VOMA, and everything
>> > looks
>> > ok. I found issues with the boot drives of a couple of VMs, so I turfed
>> > them
>> > and am no restoring them from backups, just to be sure all is well.
>> >
>> > I will send the diagnostics as soon as I get my first node booted.
>>
>> Ok, thanks. We need to check for the logs during the time the resync was
>> running
>>
>> > On Tuesday, December 6, 2016 at 1:14:54 AM UTC-8, quadstor wrote:
>> >>
>> >> Between 3.2.9 -> 3.2.11, nothing really changed in the IO path. Just a
>> >> few installation script changes and one memory corruption during
>> >> qmirror which isn't related to the issue you face. Please send us the
>> >> diagnostics from both nodes to sup...@quadstor.com
>> >>
>> >> For a full resync, the only way as of now would be to delete the VDisk
>> >> on the destination and setup mirroring again.
>> >>
>> >> On Tue, Dec 6, 2016 at 1:19 PM, Paul Reid  wrote:
>> >> > As I previously posted, I had a real problem while upgrading from
>> >> > 3.2.9
>> >> > to
>> >> > 3.2.11 with mirrored vDisks, where my most important vDisk appeared
>> >> > to
>> >> > be
>> >> > completely lost after the first node was upgraded. I wasn't in the
>> >> > best
>> >> > mood
>> >> > after that, as you can probably imagine.
>> >> >
>> >> > But, I have gotten my data back and my VMs are booting again.
>> >> >
>> >> > I appear to have caused a split brain issue with the upgrade somehow,
>> >> > resulting in both nodes being slaves after I rolled back to 3.2.9.
>> >> >
>> >> > My ESXi hosts were talking to my first node, which has corrupted
>> >> > data.
>> >> > When
>> >> > I forced my hosts to talk to my second node, I was able to boot my
>> >> > VMs.
>> >> >
>> >> > I then rebooted my first node, and it came up and took the master
>> >> > role
>> >> > (I
>> >> > couldn't get it to switch with qsync - it kept reporting an ioctl
>> >> > error).
>> >> >
>> >> > So, things look good - but I'm worried about the data my first node
>> >> > has,
>> >> > which is obviously garbage data now, so I shut

Re: [quadstor-virt] FC Synchronous

2016-12-15 Thread QUADStor Support
As of now it is IPv4 based. There aren't any plans yet for replication
over FC, although there was some initial work done on this.

On Thu, Dec 15, 2016 at 9:21 PM,   wrote:
> Hi All,
>
> Wanted to test my fcip setup, can we replicate between 2 Quadstor host using
> FC?
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] OMG, I kinda fixed it!

2016-12-20 Thread QUADStor Support
Could you please send the diagnostics logs from both nodes to us at
supp...@quadstor.com, along with the full /var/log/messages from both
systems? We still need to determine why the upgrade failed and have had
no luck so far at our end.

On Wed, Dec 7, 2016 at 9:53 AM, QUADStor Support  wrote:
> The tadttr thread seems to wait for the peer to resync. You can try
> the following
>
> On the node which is causing an issue
> /sbin/chkconfig quadstor off
> restart
>
> On the node which is fine, go the mirroring settings for a vdisk and
> delete the mirroring setup.
>
> Then on the node which is causing an issue
> chkconfig quadstor on
> service quadstor start
>
> You should now be able to delete the vdisk.
>
>
> On Wed, Dec 7, 2016 at 6:36 AM, Paul Reid  wrote:
>> I seem to be having some difficulty removing the vDisk from the node with
>> the corrupted disk. When I attempt to remove the vDisk, the web interface
>> stops responding. I am seeing "task tdatt0:4954 blocked for more than 120
>> seconds" on the QuadStor console, even before I try to remove the vDisk.
>>
>> Is there another way to remove the vDisk, or should I be going down the road
>> of a reinstall to get where I need to go?
>>
>> Note that the QuadStor node is isolated from my good QuadStor node, so it
>> can't try to synchronize and risk damaging my good node. Maybe that's
>> related.
>>
>> On Tuesday, December 6, 2016 at 3:04:09 AM UTC-8, quadstor wrote:
>>>
>>> On Tue, Dec 6, 2016 at 2:56 PM, Paul Reid  wrote:
>>> > Thanks for the quick response.
>>> >
>>> > Yeah, I didn't think a whole lot changed, but something was unhappy
>>> > during
>>> > the upgrade. I did have problems getting 3.2.9 uninstalled (I always
>>> > seem to
>>> > have to run the uninstall command twice to get it to go - the first time
>>> > spits out an error with every version I've upgraded so far). Then I had
>>> > to
>>> > fiddle with the DNS settings of CentOS 7 because I had changed them from
>>> > my
>>> > temporary ones to my production ones, which required a reboot to enable
>>> > so I
>>> > could download the 3.2.11 RPM. Once I installed the new RPM and the
>>> > synchronization started, everything got very unhappy, with VMs freezing.
>>> > The
>>> > synchronization carried on, even after the VMs all froze, and seemed to
>>> > take
>>> > longer than I expected, considering the host was doing virtually nothing
>>> > and
>>> > the QuadStor node was only down long enough to reboot and install. I
>>> > probably screwed something up that I can't think of right now - it'll
>>> > come
>>> > to me in my sleep, or something, maybe.
>>>
>>> The uninstall failure was the one fixed in 3.2.10. The resync
>>> shouldn't have taken much time and something over there needs fixing.
>>>
>>> > So, to get the full resync done, I bring up my corrupted node with the
>>> > virtual NICs disconnected, so it can't talk to my other node, delete my
>>> > vDisks, reconnect my virtual NICs, then recreate my vDisks, and the good
>>> > node should mirror everything to my previously problem node, and I
>>> > should be
>>> > good to go. Does that sound right?
>>>
>>> You do not need to recreate the vdisk after deleting it. Once
>>> mirroring is setup, the source node will create the vdisk on the
>>> destination.
>>>
>>> > I did scan my VMFS volumes for corruption with VOMA, and everything
>>> > looks
>>> > ok. I found issues with the boot drives of a couple of VMs, so I turfed
>>> > them
>>> > and am no restoring them from backups, just to be sure all is well.
>>> >
>>> > I will send the diagnostics as soon as I get my first node booted.
>>>
>>> Ok, thanks. We need to check for the logs during the time the resync was
>>> running
>>>
>>> > On Tuesday, December 6, 2016 at 1:14:54 AM UTC-8, quadstor wrote:
>>> >>
>>> >> Between 3.2.9 -> 3.2.11, nothing really changed in the IO path. Just a
>>> >> few installation script changes and one memory corruption during
>>> >> qmirror which isn't related to the issue you face. Please send us the
>>> >> diagnostics from both nodes to sup...@quadstor.com
>>> >>
>>> >> For a full resync, the only way as 

Re: [quadstor-virt] Fencing with physical machines

2017-01-27 Thread QUADStor Support
The fence command usage for iLO isn't different from the other examples:

/quadstor/bin/qmirrorcheck -a -t fence -r 10.0.13.151 -v
'/usr/sbin/fence_ilo -a 10.0.13.151 -l Admin -p Password'

Where 10.0.13.151 is the node we are trying to fence. You can test whether
the fence command is communicating correctly with the iLO by sending '-o
status' to fence_ilo. (Note that it is most likely
fence_ilo2/fence_ilo3/fence_ilo4)
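
For example, a status check with the same parameters as above would be:

/usr/sbin/fence_ilo -a 10.0.13.151 -l Admin -p Password -o status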


On Wed, Jan 25, 2017 at 12:58 AM, Fabien Rouach  wrote:
> Hi,
> I'm currently building a test lab with Quadstor storage virtualisation.
> I have 2 nodes (physical machines) in HA mirroring.
>
> I would like to configure fencing, but all the examples I found used VM nodes,
> and part of the setup is done on the VM host.
> Is any guide / example available on how to do it with physical machines?
>
> I'd like to fence using the network interface dedicated to replication.
> Optionally, I'd like to fence using the iLO interfaces (they are HP machines);
> I've seen it's possible but it seems complicated. Has anyone already done that?
>
> Is it possible to fence on 2 "criteria" (replication interface not being
> reachable + iLO)?
>
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] Fencing with physical machines

2017-01-30 Thread QUADStor Support
The criterion for fencing a peer node is that a command in a write
sequence has timed out. At this point, the only way to continue with the
current write command is to fence the peer node to prevent any further
changes with respect to that write command; after that, it is assumed to
be safe to continue with the write command.

Of course, it could very well be the case that the peer node is healthy
and the problem is in the network between the two. But even in that
case the only way to ensure continuity is to fence the peer node and let
the current node handle the current and future writes.

Given the above, there isn't a good way of doing the following (the "keep
mirroring" part):
"Ilo network is down or saying server is OFF  / remote node services
are up (got a dedicated direct link for replication that could be used
to verify that)-> keep mirroring and don't takeover ownership"

Mirroring can only be synchronous if the writes complete successfully
at both ends.

The qmirrorcheck command does not necessarily have to be fence_ilo. It
can be a script which executes fence_ilo and evaluates various
conditions before determining whether a fence was successful or not. Also,
if one fence (qmirrorcheck) command fails and there are others
specified, then they are also executed. The order is the one listed by
qmirrorcheck. (Maybe we should add an ordering here so that the fence
commands are always tried in the same order across reboots.) Each fence
command is executed till one succeeds (an exit status of 0). If all
fence commands fail, then the current and future writes are
failed until there is a reconnect to the peer node when the node or
network is up. (The reconnect isn't automatic and occurs only on the
peer quadstor service startup.)
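
A minimal sketch of such a wrapper (hypothetical: the agent name, iLO
address and credentials are just the ones from the earlier example, and
the success criteria would be whatever conditions you decide on):

#!/bin/sh
# Hypothetical fence wrapper: power the peer off via iLO and report
# success (exit 0) only when the iLO afterwards reports the node as off.
PEER_ILO=10.0.13.151
FENCE=/usr/sbin/fence_ilo        # may be fence_ilo2/fence_ilo3/fence_ilo4

# request a power-off of the peer node
"$FENCE" -a "$PEER_ILO" -l Admin -p Password -o off || exit 1

# verify the peer is actually off before declaring the fence successful
"$FENCE" -a "$PEER_ILO" -l Admin -p Password -o status | grep -qi off && exit 0
exit 1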


On Tue, Jan 31, 2017 at 12:42 AM, Fabien Rouach  wrote:
> Thanks,
> Was easier than expected
>
> Now can i fence on several "criterias" and add some logic?
> I'd like to check if remote node Quadstor Service is running
>
> So i could evaluate cases like:
> Ilo network is down or saying server is OFF  / remote node services are up
> (got a dedicated direct link for replication that could be used to verify
> that)-> keep mirroring and don't takeover ownership
> Ilo network return server is ON  / remote node services are down -> takeover
> ownership
>
>
> Le vendredi 27 janvier 2017 13:12:21 UTC-5, quadstor a écrit :
>>
>> The fence command usage for ILO isn't different from the other examples
>>
>> /quadstor/bin/qmirrorcheck -a -t fence -r 10.0.13.151 -v
>> '/usr/sbin/fence_ilo -a 10.0.13.151  -l Admin -p Password'
>>
>> Where 10.0.13.151 is the node we are trying to fence. You can test if
>> the fence command is communicating correctly with by sending '-o
>> status' to fence_ilo. (Note that its most likely
>> fence_ilo2/fence_ilo3/fence_ilo4)
>>
>>
>> On Wed, Jan 25, 2017 at 12:58 AM, Fabien Rouach 
>> wrote:
>> > Hi,
>> > I'm actually building a test lab with Quadstor storage virtualisation.
>> > Got 2 node (physicals machine in HA mirroring
>> >
>> > Would like to config fencing but all exemple i found used VM nodes, and
>> > part
>> > of the setup is done on the VM host.
>> > Any guide / exemple available on how to do it with physical machine.
>> >
>> > I'd like to fence using network interface dedicated to replication.
>> > Optionally, I'like to fence using iLo interfaces (they are HP machines),
>> > seen it's possible but seams complicated, anyone had already done that?
>> >
>> > Is it possible to fence on 2 "criteria" (replication interface not
>> > beeing
>> > reachable + iLo ?
>> >
>> > Thanks
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] Fencing with physical machines

2017-02-01 Thread QUADStor Support
On Tue, Jan 31, 2017 at 7:49 PM, Fabien Rouach  wrote:
> I'm starting to understand better how it works.
> I should have described my test infra and my goals; that would have been
> clearer than what I wrote.
> So, I have two nodes (physical machines)
> Both have 3 networks
> - ILo
> - Management (to access web interface)
> - Replication (for the mirroring, direct network cable between the 2 nodes)
> Vdisk are accessed throught Fibre channel by vmware infra
>
> Here are the cases I wanna deal with:
> - 2 nodes are up / quadstor service up / vdisk OK but they can't communicate
> through the network (network or card issue)
>  I don't want to end up with neither node fencing, refusing writes ->
> storage down
>  I don't want to end up with 2 nodes fencing, both thinking they are
> masters, accepting writes request ending with data inconsistency
A reliable fencing framework will not have false positives. So if the
fence command returns a success, then the other node is considered to
be down.

The first case you mention is possible if the fence command fails. In
such a case the node trying to fence cannot do much, since it is not
sure whether the peer is down or active. So whether it is master or
slave, it needs to start failing the writes. For a vdisk in a slave
role to become master, the fence command needs to be successful.

On the other hand, the second case cannot occur. The worst that can
happen is that the two nodes might fence each other; in that case, based
on the fence command (a power down/power off sequence or a reset), when
the two nodes are brought back online and the quadstor services are
started, each node negotiates its role, with one taking over
the master role and the other the slave role.

> - 1 node is down (no power, no ILo), the second can't fence and stop
> accepting writes request -> storage down
Such a scenario will require manual intervention

> Last case, more rare, but got to think about it:
> Let say I have 2 node N1 and N2
> both have 2 storage pool P1 and P2 (I will have different pools, with
> differents type of disks and perf levels)
> and 2 Virtual Disk D1 (on pool P1) and D2 (on pool P2) in sync mirroring
>
> P1 is down on N1 and P2 is down on N2 (ie RAID failed)
> In that case, i can't fence by rebooting the other node

This isn't a case for fencing. Synchronous mirroring requires that a
write received at one end be successfully written on both nodes, so
one vdisk's state cannot move ahead of the other on the peer node. In
case of a RAID failure on one of the nodes, the write command
encounters a write error and not a network timeout. The error returned
to the host is again a write failure.


> Regarding that last case, when quadstor does the qmirrorcheck and fence, is
> it done for the one virtual disk / one storage pool or all virtual disks of
> the node?

Detection of a peer failure is done on a write. Reads are in any case
local (unless there is a sync occurring). So it can very well be the
case that vdisks which are not receiving any active write IO are
unaware of the peer node failure, and the peer node can come back online
and reconnect to the active node. But this is an interesting aspect
which we need to test further to see if there are any corner cases
that need fixing.

>
> I will start working on scripts and will share them here, I guess I'm not
> the only one trying to replace high end SAN and need to reach same level of
> availability for the solution to be accepted.
>
>
>
>
> Le lundi 30 janvier 2017 14:54:57 UTC-5, quadstor a écrit :
>>
>> The criteria for fencing a peer node is if a command in a write
>> sequence has timed out. At this point to continue with the current
>> write command is to fence the peer node to prevent any further changes
>> w.r.t to the write command, and then it is assumed to be safe to
>> continue with the write command.
>>
>> Ofcourse it could be very well the case that the peer node is healthy
>> and the problem is in the network between the two. But even in that
>> case the only way to ensure continuity is fence the peer node and let
>> the current node handle the current and future writes.
>>
>> Given the above, there isn't a good way of doing below (the keep mirroring
>> part)
>>
>> "Ilo network is down or saying server is OFF  / remote node services
>> are up (got a dedicated direct link for replication that could be used
>> to verify that)-> keep mirroring and don't takeover ownership"
>>
>> Mirroring can only be synchronous if the writes complete successfully
>> at both the ends
>>
>> The qmirrorcheck command does not necessarily have to be fence_ilo. It
>> can be a script which executes fence_ilo and evaluates various
>> conditions before determining a fence as successful or not. Also if
>> one fence (qmirrorcheck) command fails and if there are others
>> specified then they are also executed. The order is one listed by
>> qmirrorcheck. Maybe we should add an ordering here so that the fence
>> commands are always tried in 

Re: [quadstor-virt] Boot from SAN with Quadstor not working

2017-03-03 Thread QUADStor Support
The kernel messages indicate that the initiator port is no longer
active (no further FC session from that side). So it seems that after
the reboot the initiator port is probably not connecting back to the
target.

Does Fast!UTIL list the LUN after the reboot, and is it marked as the boot disk?
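
One way to check from the QUADStor node's side (a sketch; this just reads
the standard Linux FC transport attributes in sysfs) is whether the
client's remote port comes back to an Online state after the reboot:

# list the state of each FC remote port seen by the target-side HBAs
grep . /sys/class/fc_remote_ports/*/port_state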

On Fri, Mar 3, 2017 at 2:25 PM, Timo  wrote:
> Good morning,
>
> we made a new attempt with Quadstor on CentOS7 with the latest 3.2.11.
> All worked well so far, including the "fencing" with fcconfig so that only
> the client is able
> to see the FC LUN it should see.
>
> The vDISK are without any parameters (no D,E,C).
>
> In the client we made the QLA-Adapter bootable and assigned a LUN.
> When booting the client with an Ubuntu or CentOS install CD, all worked
> well.
> They can see the disk and install also worked fine.
>
> Yet when we try to reboot, nothing happens any more, except this message in
> /var/log/messages on the quadstor
>
> Mar  3 09:43:32 XXX kernel: rport-1:0-33: blocked FC remote port time out:
> removing rport
> Mar  3 09:43:32 XXX kernel: rport-7:0-33: blocked FC remote port time out:
> removing rport
>
> And then nothing. It does not boot.
>
> Any advice? What are we doing wrong?
> Client and Server are both Dell R710 with QLA2562 on two fc fabrics with
> Qlogic SANboxes.
>
> Any help would be welcome.
>
> Thx!
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: [quadstor-virt] Physical Storage Missing, vDisk not in config but presented as mount point

2017-03-06 Thread QUADStor Support
Please send the diagnostics file (HTML UI -> Physical Storage -> Run
Diagnostics) to supp...@quadstor.com

On Mon, Mar 6, 2017 at 3:11 PM, Gary Eastwood  wrote:
> Good Morning,
> Set everything up on Friday, configured a 15TB disk in VMware which was
> configured as the physical storage for data through web UI.
> I then created a 10TB vDisk called vDisk1 and setup the vDisk to be mounted
> to /mnt/VeeamData.
>
> Everything looked good so in Veeam, added the mount point as a backup
> repository and attempted a backup copy job.
>
> I first received the following error while attempting a backup copy job:
>
> 03/03/2017 15:18:20 :: Error: SaveFileContent failed
>
>
> This morning I have come in and checked things to continue to troubleshoot
> and the Physical Storage tab has None where the disk was once configured. As
> the web UI shows nothing, if I check the vDisk config it has the following
> output:
>
> [root@prod-dc1-quadstor bin]# ./vdconfig -l
> Name   Pool   Serial Number   Size(GB)   LUN   Status
>
> However, fdisk -l shows the following:
>
>
> Disk /dev/sdc: 10995.1 GB, 10995116277760 bytes, 21474836480 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 524288 bytes
> Disk label type: gpt
>
>
> #  Start  End          Size  Type             Name
>  1  2048   19531249663  9.1T  Microsoft basic  primary
>
> So it appears it is present but not at the same time?
>
> Any help you can offer would be appreciated!
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Physical Storage Missing, vDisk not in config but presented as mount point

2017-03-06 Thread QUADStor Support
What version are you running? This should have been fixed since 3.2.10.
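
If unsure, the installed package version can be checked from the shell
with one of the following, depending on the distribution:

rpm -qa | grep quadstor
dpkg -l | grep quadstor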

On Mon, Mar 6, 2017 at 4:16 PM, Gary Eastwood  wrote:
> Just an update on this
>
> I have found that the vDisk is not auto mounting correctly, so when I ran a
> backup it filled up the OS disk and broke things.
>
> After getting back from that, I've gone through and checked things and I
> have fstab setup as per the documentation but it still doesn't seem to auto
> mount the vDisk.
>
> vi /etc/fstab
> /dev/quadstor/vDisk1 /mnt/VeeamData xfs defaults,nofail,noauto 0 0
>
> From what I understand, automount is disabled in fstab, quadstor then checks
> and sees the entry and mounts the vDisk once the daemon has started, however
> this doesn't seem to be happening?
>
> On Monday, 6 March 2017 10:05:18 UTC, quadstor wrote:
>>
>> Please send the diagnostics file (HTML UI -> Physical Storage ->Run
>> Diagnostics) to sup...@quadstor.com
>>
>> On Mon, Mar 6, 2017 at 3:11 PM, Gary Eastwood  wrote:
>> > Good Morning,
>> > Set everything up on Friday, configured a 15TB disk in VMware which was
>> > configured as the physical storage for data through web UI.
>> > I then created a 10TB vDisk called vDisk1 and setup the vDisk to be
>> > mounted
>> > to /mnt/VeeamData.
>> >
>> > Everything looked good so in Veeam, added the mount point as a backup
>> > repository and attempted a backup copy job.
>> >
>> > I first received the following error while attempting a backup copy job:
>> >
>> > 03/03/2017 15:18:20 :: Error: SaveFileContent failed
>> >
>> >
>> > This morning I have come in and checked things to continue to
>> > troubleshoot
>> > and the Physical Storage tab has None where the disk was once
>> > configured. As
>> > the web UI shows nothing, if I check the vDisk config it has the
>> > following
>> > output:
>> >
>> > [root@prod-dc1-quadstor bin]# ./vdconfig -l
>> > Name Pool Serial NumberSize(GB) LUN   Status
>> >
>> > However, fdisk -l shows the following:
>> >
>> >
>> > Disk /dev/sdc: 10995.1 GB, 10995116277760 bytes, 21474836480 sectors
>> > Units = sectors of 1 * 512 = 512 bytes
>> > Sector size (logical/physical): 512 bytes / 512 bytes
>> > I/O size (minimum/optimal): 512 bytes / 524288 bytes
>> > Disk label type: gpt
>> >
>> >
>> > # Start  EndSize  TypeName
>> >  1 2048  195312496639.1T  Microsoft basic primary
>> >
>> > So it appears it is present but not at the same time?
>> >
>> > Any help you can offer would be appreciated!
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Physical Storage Missing, vDisk not in config but presented as mount point

2017-03-06 Thread QUADStor Support
Does this work?

mount /dev/quadstor/vDisk1

That is executed in /quadstor/etc/quadstor.init script as seen below

udevadm settle
for i in `/quadstor/bin/vdconfig -l | sed -n '1!p' | awk '{print $1}'`; do
        stat /dev/quadstor/$i > /dev/null 2>&1
        if [ "$?" != "0" ]; then
                logger -s "WARN: Cannot yet find symlink /dev/quadstor/$i"
                continue
        fi

        mount /dev/quadstor/$i > /dev/null 2>&1
done


/dev/quadstor is created by a udev script and if it is not present by
this time (which it should be), you should see the warning "Cannot yet
find symlink ..." in /var/log/messages


On Mon, Mar 6, 2017 at 10:37 PM, Gary Eastwood  wrote:
> Currently running version 3.2.11.
> Have attached support bundle from System page
>
> On Monday, 6 March 2017 16:56:36 UTC, quadstor wrote:
>>
>> What version are you running. This should have been fixed since 3.2.10
>>
>> On Mon, Mar 6, 2017 at 4:16 PM, Gary Eastwood  wrote:
>> > Just an update on this
>> >
>> > I have found that the vDisk is not auto mounting correctly, so when I
>> > ran a
>> > backup it filled up the OS disk and broke things.
>> >
>> > After getting back from that, I've gone through and checked things and I
>> > have fstab setup as per the documentation but it still doesn't seem to
>> > auto
>> > mount the vDisk.
>> >
>> > vi /etc/fstab
>> > /dev/quadstor/vDisk1 /mnt/VeeamData xfs defaults,nofail,noauto 0 0
>> >
>> > From what I understand, automount is disabled in fstab, quadstor then
>> > checks
>> > and sees the entry and mounts the vDisk once the daemon has started,
>> > however
>> > this doesn't seem to be happening?
>> >
>> > On Monday, 6 March 2017 10:05:18 UTC, quadstor wrote:
>> >>
>> >> Please send the diagnostics file (HTML UI -> Physical Storage ->Run
>> >> Diagnostics) to sup...@quadstor.com
>> >>
>> >> On Mon, Mar 6, 2017 at 3:11 PM, Gary Eastwood 
>> >> wrote:
>> >> > Good Morning,
>> >> > Set everything up on Friday, configured a 15TB disk in VMware which
>> >> > was
>> >> > configured as the physical storage for data through web UI.
>> >> > I then created a 10TB vDisk called vDisk1 and setup the vDisk to be
>> >> > mounted
>> >> > to /mnt/VeeamData.
>> >> >
>> >> > Everything looked good so in Veeam, added the mount point as a backup
>> >> > repository and attempted a backup copy job.
>> >> >
>> >> > I first received the following error while attempting a backup copy
>> >> > job:
>> >> >
>> >> > 03/03/2017 15:18:20 :: Error: SaveFileContent failed
>> >> >
>> >> >
>> >> > This morning I have come in and checked things to continue to
>> >> > troubleshoot
>> >> > and the Physical Storage tab has None where the disk was once
>> >> > configured. As
>> >> > the web UI shows nothing, if I check the vDisk config it has the
>> >> > following
>> >> > output:
>> >> >
>> >> > [root@prod-dc1-quadstor bin]# ./vdconfig -l
>> >> > Name Pool Serial NumberSize(GB) LUN   Status
>> >> >
>> >> > However, fdisk -l shows the following:
>> >> >
>> >> >
>> >> > Disk /dev/sdc: 10995.1 GB, 10995116277760 bytes, 21474836480 sectors
>> >> > Units = sectors of 1 * 512 = 512 bytes
>> >> > Sector size (logical/physical): 512 bytes / 512 bytes
>> >> > I/O size (minimum/optimal): 512 bytes / 524288 bytes
>> >> > Disk label type: gpt
>> >> >
>> >> >
>> >> > # Start  EndSize  TypeName
>> >> >  1 2048  195312496639.1T  Microsoft basic primary
>> >> >
>> >> > So it appears it is present but not at the same time?
>> >> >
>> >> > Any help you can offer would be appreciated!
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "QUADStor Storage Virtualization" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to quadstor-vir...@googlegroups.com.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Building kernel modules failed!

2017-03-07 Thread QUADStor Support
Kernel version > 4.0 isn't yet supported. We hope to have this ready
in a week or two.

On Mon, Mar 6, 2017 at 5:58 PM, Duh  wrote:
> Hello, module build failed on proxmox-ve host based on debian jessie
>
> root@b11:~# cat /etc/debian_version
> 8.7
> root@b11:~# uname -a
> Linux b11 4.4.44-1-pve #1 SMP PVE 4.4.44-83 (Wed, 1 Mar 2017 09:22:35 +0100)
> x86_64 GNU/Linux
> root@b11:~# dpkg -l | grep pve-headers
> ii  pve-headers-4.4.44-1-pve 4.4.44-83
> amd64The Proxmox PVE Kernel Headers
> root@b11:~# dpkg -l | grep pve-kernel
> ii  pve-firmware 1.1-11 all
> Binary firmware code for the pve-kernel
> ii  pve-kernel-4.4.44-1-pve  4.4.44-83
> amd64The Proxmox PVE Kernel Image
> root@b11:~# dpkg -l | grep quadstor
> ii  quadstor-virt3.2.11
> amd64QUADStor storage virtualization enterprise edition
>
> pve-kernel and headers are based on Ubuntu-4.4.0-63.84 packages
>
> build log follows...
>
> + [ ! -f /quadstor/lib/modules/corelib.o ]
> + uname
> + os=Linux
> + cd /quadstor/src/export
> + make clean
> rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d Module.*
> + make
> cp /quadstor/lib/modules/corelib.o /quadstor/src/export
> make -C /lib/modules/4.4.44-1-pve/build SUBDIRS=/quadstor/src/export modules
> make[1]: Entering directory '/usr/src/linux-headers-4.4.44-1-pve'
>  CC [M]  /quadstor/src/export/ldev_linux.o
>  CC [M]  /quadstor/src/export/devq.o
>  LD [M]  /quadstor/src/export/ldev.o
>  CC [M]  /quadstor/src/export/core_linux.o
> In file included from include/scsi/scsi_cmnd.h:10:0,
> from /quadstor/src/export/core_linux.c:26:
> include/scsi/scsi_device.h:223:3: warning: ‘printk’ is an unrecognized
> format function type [-Wformat=]
>   const char *, ...);
>   ^
> include/scsi/scsi_device.h:229:40: warning: ‘printk’ is an unrecognized
> format function type [-Wformat=]
> scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
>^
> In file included from include/uapi/linux/in.h:23:0,
> from include/linux/in.h:23,
> from /quadstor/src/export/linuxdefs.h:18,
> from /quadstor/src/export/core_linux.c:19:
> /quadstor/src/export/core_linux.c: In function ‘sys_sock_create’:
> include/linux/socket.h:163:18: warning: passing argument 1 of
> ‘sock_create_kern’ makes pointer from integer without a cast
> #define AF_INET  2 /* Internet IP Protocol  */
>  ^
> /quadstor/src/export/core_linux.c:174:28: note: in expansion of macro
> ‘AF_INET’
>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
> &sys_sock->sock);
>^
> In file included from include/linux/skbuff.h:29:0,
> from include/linux/tcp.h:21,
> from /quadstor/src/export/linuxdefs.h:19,
> from /quadstor/src/export/core_linux.c:19:
> include/linux/net.h:216:5: note: expected ‘struct net *’ but argument is of
> type ‘int’
> int sock_create_kern(struct net *net, int family, int type, int proto,
> struct socket **res);
> ^
> /quadstor/src/export/core_linux.c:174:63: warning: passing argument 4 of
> ‘sock_create_kern’ makes integer from pointer without a cast
>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
> &sys_sock->sock);
>   ^
> In file included from include/linux/skbuff.h:29:0,
> from include/linux/tcp.h:21,
> from /quadstor/src/export/linuxdefs.h:19,
>  from /quadstor/src/export/core_linux.c:19:
> include/linux/net.h:216:5: note: expected ‘int’ but argument is of type
> ‘struct socket **’
> int sock_create_kern(struct net *net, int family, int type, int proto,
> struct socket **res);
> ^
> /quadstor/src/export/core_linux.c:174:11: error: too few arguments to
> function ‘sock_create_kern’
>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
> &sys_sock->sock);
>   ^
> In file included from include/linux/skbuff.h:29:0,
> from include/linux/tcp.h:21,
> from /quadstor/src/export/linuxdefs.h:19,
> from /quadstor/src/export/core_linux.c:19:
> include/linux/net.h:216:5: note: declared here
> int sock_create_kern(struct net *net, int family, int type, int proto,
> struct socket **res);
> ^
> /quadstor/src/export/core_linux.c: In function ‘g_new_bio’:
> /quadstor/src/export/core_linux.c:1127:17: warning: assignment from
> incompatible pointer type
>  bio->bi_end_io = bio_end_bio;
> ^
> /quadstor/src/export/core_linux.c: In function ‘bio_get_max_pages’:
> /quadstor/src/export/core_linux.c:1220:2: error: implicit declaration of
> function ‘bio_get_nr_vecs’ [-Werror=implicit-function-declaration]
>  return bio_get_nr_vecs(iodev);
>  ^
> cc1: some warnings being treated as errors
> scripts/Makefile.build:258: reci

Re: [quadstor-virt] Boot from SAN with Quadstor not working

2017-03-07 Thread QUADStor Support
Is there anything on the fcconfig side which might be causing this? Are
the vdisks emulating 512-byte sectors?

Try clearing all fcconfig rules, restart the quadstor system, then try
a restart at the initiator side and let us know. There is a pending FC
update which might be available next week, but boot from SAN should
have worked with the current version also.
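
For reference, after clearing the rules the restart of the quadstor
service from the shell is simply the following (a full reboot of the
QUADStor server works as well), after which the initiator can be
rebooted:

service quadstor restart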

On Mon, Mar 6, 2017 at 4:47 PM, Timo  wrote:
> Hi,
>
> Am Freitag, 3. März 2017 20:00:11 UTC+1 schrieb quadstor:
>>
>> The kernel messages indicate that the initiator port is no longer
>> active (no further FC session from that side). So it seems that after
>> the reboot the initiator port is probably not connecting back to the
>> target.
>>
>> Does fastutil list the LUN after the reboot and is it marked as the boot
>> disk ?
>
> So bootet into fastutil. And indeed, it did not list the LUNs or anything
> related to Quadstor any more.
> Why?
>
> What is wrong?
> How can we fix that?
>
>
> We have another setup for a test with targetcli, there it seems to work.
> The hosts an FC are identically configured.
> Except the target comes from targetcli...
>
> Thx!
>
> Best,
>
> Timo
>
>>
>> On Fri, Mar 3, 2017 at 2:25 PM, Timo  wrote:
>> > Good morning,
>> >
>> > we made a new atempt with Quadstor on CentOS7 with latest 3.2.11.
>> > All worked well so far. Also the "fencing" with fcconfig that only the
>> > Client is able
>> > to see the FC LUN it should see.
>> >
>> > The vDISK are without any parameters (no D,E,C).
>> >
>> > In the client we made the QLA-Adapter bootable and assigned a LUN.
>> > When booting the client with Ubuntu oder CentOS CD for Install, all
>> > worked
>> > well.
>> > They can see the disk and install also worked fine.
>> >
>> > Yet when we try to reboot, nothing happens any more, except this message
>> > in
>> > /var/log/messages on the quadstor
>> >
>> > Mar  3 09:43:32 XXX kernel: rport-1:0-33: blocked FC remote port time
>> > out:
>> > removing rport
>> > Mar  3 09:43:32 XXX kernel: rport-7:0-33: blocked FC remote port time
>> > out:
>> > removing rport
>> >
>> > And then nothing. It does not boot.
>> >
>> > Any advice? What are we doing wrong?
>> > Client and Server are both Dell R710 with QLA2562 on two fc fabrics with
>> > Qlogic SANboxes.
>> >
>> > Any help would be welcome.
>> >
>> > Thx!
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Physical Storage Missing, vDisk not in config but presented as mount point

2017-03-08 Thread QUADStor Support
Please try the following
service quadstor stop
Add a 'set -x' in /quadstor/etc/quadstor.init right after the #!/bin/bash line

#!/bin/bash
set -x

/quadstor/etc/quadstor.init start > /tmp/start.out 2>&1

Please send us /tmp/start.out

On Tue, Mar 7, 2017 at 2:42 PM, Gary Eastwood  wrote:
> I am able to run the command
> mount /dev/quadstor/vDisk1 /mnt/VeeamData
>
> If I look at /var/log/messages I don't see any symlink warnings, they're all
> just events like
>
> Mar  6 11:57:17 prod-dc1-quadstor systemd: Started Session 32 of user root.
> Mar  6 11:57:17 prod-dc1-quadstor systemd-logind: New session 32 of user
> root.
>
> On Monday, 6 March 2017 17:30:59 UTC, quadstor wrote:
>>
>> Does this work ?
>>
>> mount /dev/quadstor/vDisk1
>>
>> That is executed in /quadstor/etc/quadstor.init script as seen below
>>
>> udevadm settle
>> for i in `/quadstor/bin/vdconfig -l | sed -n '1!p' | awk
>> '{print $1}'`; do
>> stat /dev/quadstor/$i > /dev/null 2>&1
>> if [ "$?" != "0" ]; then
>> logger -s "WARN: Cannot yet find symlink
>> /dev/quadstor/$i"
>> continue
>> fi
>>
>> mount /dev/quadstor/$i > /dev/null 2>&1
>> done
>>
>>
>> /dev/quadstor is created by an udev script and if this is
>> not present by this time (which it should) , you should see the
>> warning "Cannot yet find symlink ..." in /var/log/messages
>>
>>
>> On Mon, Mar 6, 2017 at 10:37 PM, Gary Eastwood  wrote:
>> > Currently running version 3.2.11.
>> > Have attached support bundle from System page
>> >
>> > On Monday, 6 March 2017 16:56:36 UTC, quadstor wrote:
>> >>
>> >> What version are you running. This should have been fixed since 3.2.10
>> >>
>> >> On Mon, Mar 6, 2017 at 4:16 PM, Gary Eastwood 
>> >> wrote:
>> >> > Just an update on this
>> >> >
>> >> > I have found that the vDisk is not auto mounting correctly, so when I
>> >> > ran a
>> >> > backup it filled up the OS disk and broke things.
>> >> >
>> >> > After getting back from that, I've gone through and checked things
>> >> > and I
>> >> > have fstab setup as per the documentation but it still doesn't seem
>> >> > to
>> >> > auto
>> >> > mount the vDisk.
>> >> >
>> >> > vi /etc/fstab
>> >> > /dev/quadstor/vDisk1 /mnt/VeeamData xfs defaults,nofail,noauto 0 0
>> >> >
>> >> > From what I understand, automount is disabled in fstab, quadstor then
>> >> > checks
>> >> > and sees the entry and mounts the vDisk once the daemon has started,
>> >> > however
>> >> > this doesn't seem to be happening?
>> >> >
>> >> > On Monday, 6 March 2017 10:05:18 UTC, quadstor wrote:
>> >> >>
>> >> >> Please send the diagnostics file (HTML UI -> Physical Storage ->Run
>> >> >> Diagnostics) to sup...@quadstor.com
>> >> >>
>> >> >> On Mon, Mar 6, 2017 at 3:11 PM, Gary Eastwood 
>> >> >> wrote:
>> >> >> > Good Morning,
>> >> >> > Set everything up on Friday, configured a 15TB disk in VMware
>> >> >> > which
>> >> >> > was
>> >> >> > configured as the physical storage for data through web UI.
>> >> >> > I then created a 10TB vDisk called vDisk1 and setup the vDisk to
>> >> >> > be
>> >> >> > mounted
>> >> >> > to /mnt/VeeamData.
>> >> >> >
>> >> >> > Everything looked good so in Veeam, added the mount point as a
>> >> >> > backup
>> >> >> > repository and attempted a backup copy job.
>> >> >> >
>> >> >> > I first received the following error while attempting a backup
>> >> >> > copy
>> >> >> > job:
>> >> >> >
>> >> >> > 03/03/2017 15:18:20 :: Error: SaveFileContent failed
>> >> >> >
>> >> >> >
>> >> >> > This morning I have come in and checked things to continue to
>> >> >> > troubleshoot
>> >> >> > and the Physical Storage tab has None where the disk was once
>> >> >> > configured. As
>> >> >> > the web UI shows nothing, if I check the vDisk config it has the
>> >> >> > following
>> >> >> > output:
>> >> >> >
>> >> >> > [root@prod-dc1-quadstor bin]# ./vdconfig -l
>> >> >> > Name Pool Serial NumberSize(GB) LUN   Status
>> >> >> >
>> >> >> > However, fdisk -l shows the following:
>> >> >> >
>> >> >> >
>> >> >> > Disk /dev/sdc: 10995.1 GB, 10995116277760 bytes, 21474836480
>> >> >> > sectors
>> >> >> > Units = sectors of 1 * 512 = 512 bytes
>> >> >> > Sector size (logical/physical): 512 bytes / 512 bytes
>> >> >> > I/O size (minimum/optimal): 512 bytes / 524288 bytes
>> >> >> > Disk label type: gpt
>> >> >> >
>> >> >> >
>> >> >> > # Start  EndSize  TypeName
>> >> >> >  1 2048  195312496639.1T  Microsoft basic primary
>> >> >> >
>> >> >> > So it appears it is present but not at the same time?
>> >> >> >
>> >> >> > Any help you can offer would be appreciated!
>> >> >> >
>> >> >> > --
>> >> >> > You received this message because you are subscribed to the Google
>> >> >> > Groups
>> >> >> > "QUADStor Storage Virtualization" group.
>> >> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> >> > send
>> >> >> 

Re: [quadstor-virt] Physical Storage Missing, vDisk not in config but presented as mount point

2017-03-14 Thread QUADStor Support
You need to add a 'set -x' in /quadstor/etc/quadstor.init right after
the #!/bin/bash line

#!/bin/bash (This is already present in the file)
set -x (add this)

But that is probably ok.

There were other problems which caused the module load to fail, which
is probably why you were not able to connect to the GUI

+ /sbin/insmod /quadstor/lib/modules/3.10.0-514.6.2.el7.x86_64/coredev.ko
+ check_error 'Failed to insert core module'
+ '[' 1 '!=' 0 ']'
+ echo 'Failed to insert core module'
Failed to insert core module

Was there a recent kernel upgrade? We suggest running the diagnostics
and sending the output to supp...@quadstor.com

The delay in the startup is because of the deduplication tables being
loaded into memory. Does a df now show the filesystem mounted?
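
A quick way to verify from the shell after a restart (the mount point
name below is the one used earlier in this thread):

lsmod | grep coredev          # is the core module loaded
dmesg | tail -n 50            # any hint why the insmod failed
df -h /mnt/VeeamData          # is the VDisk filesystem mounted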



On Mon, Mar 13, 2017 at 6:21 PM, Gary Eastwood  wrote:
> Good Afternoon,
> Apologies for the delay, attached is the start.out file as requested. After
> putting those 2 lines in it looks like the service failed to start, not sure
> if that is expected.
>
> I changed the file back but now I can't get on the web interface, with the
> following message appearing:
>
> Gateway Timeout
>
> The gateway did not receive a timely response from the upstream server or
> application.
>
> I have restarted the server and this has also not helped.
>
> On Wed, Mar 8, 2017 at 7:01 PM, QUADStor Support 
> wrote:
>>
>> Please try the following
>> service stop quadstor
>> Add a 'set -x' in /quadstor/etc/quadstor.init right after the #!/bin/bash
>> line
>>
>> #!/bin/bash
>> set -x
>>
>> /quadstor/etc/quadstor.init start > /tmp/start.out 2>&1
>>
>> Please send us /tmp/start.out
>>
>> On Tue, Mar 7, 2017 at 2:42 PM, Gary Eastwood  wrote:
>> > I am able to run the command
>> > mount /dev/quadstor/vDisk1 /mnt/VeeamData
>> >
>> > If I look at /var/log/messages I don't see any symlink warnings, they're
>> > all
>> > just events like
>> >
>> > Mar  6 11:57:17 prod-dc1-quadstor systemd: Started Session 32 of user
>> > root.
>> > Mar  6 11:57:17 prod-dc1-quadstor systemd-logind: New session 32 of user
>> > root.
>> >
>> > On Monday, 6 March 2017 17:30:59 UTC, quadstor wrote:
>> >>
>> >> Does this work ?
>> >>
>> >> mount /dev/quadstor/vDisk1
>> >>
>> >> That is executed in /quadstor/etc/quadstor.init script as seen below
>> >>
>> >> udevadm settle
>> >> for i in `/quadstor/bin/vdconfig -l | sed -n '1!p' | awk
>> >> '{print $1}'`; do
>> >> stat /dev/quadstor/$i > /dev/null 2>&1
>> >> if [ "$?" != "0" ]; then
>> >> logger -s "WARN: Cannot yet find symlink
>> >> /dev/quadstor/$i"
>> >> continue
>> >> fi
>> >>
>> >> mount /dev/quadstor/$i > /dev/null 2>&1
>> >> done
>> >>
>> >>
>> >> /dev/quadstor is created by an udev script and if this is
>> >> not present by this time (which it should) , you should see the
>> >> warning "Cannot yet find symlink ..." in /var/log/messages
>> >>
>> >>
>> >> On Mon, Mar 6, 2017 at 10:37 PM, Gary Eastwood 
>> >> wrote:
>> >> > Currently running version 3.2.11.
>> >> > Have attached support bundle from System page
>> >> >
>> >> > On Monday, 6 March 2017 16:56:36 UTC, quadstor wrote:
>> >> >>
>> >> >> What version are you running. This should have been fixed since
>> >> >> 3.2.10
>> >> >>
>> >> >> On Mon, Mar 6, 2017 at 4:16 PM, Gary Eastwood 
>> >> >> wrote:
>> >> >> > Just an update on this
>> >> >> >
>> >> >> > I have found that the vDisk is not auto mounting correctly, so
>> >> >> > when I
>> >> >> > ran a
>> >> >> > backup it filled up the OS disk and broke things.
>> >> >> >
>> >> >> > After getting back from that, I've gone through and checked things
>> >> >> > and I
>> >> >> > have fstab setup as per the documentation but it still doesn't
>> >> >> > seem
>> >&

Re: [quadstor-virt] how to check qsync state

2017-03-17 Thread QUADStor Support
The sync status as a percentage will be added in the next release (in
a week or two)

On Wed, Mar 8, 2017 at 1:27 AM, Paul Reid  wrote:
> I know this is an ancient thread, but I'm still seeing the same issue on
> 3.2.9. Did this ever get sorted out? It'd be nice to know what percentage
> has been synchronized, so you can get some indication of whether it's going
> or not, and when it might be complete.
>
> Thanks!
>
> On Monday, September 2, 2013 at 11:28:56 AM UTC-7, quadstor wrote:
>>
>> On 9/2/13, Mac Linux  wrote:
>> > hello,
>> >
>> > I just updated my 2 node cluster to 3.0.46 and on one vdisk states
>> > "resync
>> > needed" on the command qsync -l.
>> >
>> > How can I check if it is ready or the percentage ?
>>
>> Usually the VDisk in the master role indicates whether sync has
>> started (Resyncing). However currently the status information is
>> inconvenient as very little detail of the progress is shown. Either
>> the states are Resync needed, Resyncing or Enabled ( Enabled when
>> resync completed successfully and disabled for any failure)
>>
>> We will be improving this in the coming weeks.
>>
>> > thanks
>> > mac
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/groups/opt_out.
>> >
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] RDMA in QuadStor

2017-03-17 Thread QUADStor Support
Mirroring is only over IP (v4). It's unlikely that we will support
other interfaces in the near future. There is some initial work done
for mirroring over FC; if that is completed, RDMA should be possible.
On a related note, the srpt driver is quite old and the newer releases
are unlikely to have support for srpt for some time.

On Fri, Mar 17, 2017 at 6:29 PM, MDL  wrote:
> Hello,
>
> Is Quadstor currently able to leverage RoCE / RDMA for synchronous mirroring
> communications ?
>
> If yes, where can we find documentation to enable and test the functionality
> ?
>
> Thanks !
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Building kernel modules failed!

2017-03-28 Thread QUADStor Support
The next release is early next week. This should be fixed in that.

On Tue, Mar 28, 2017 at 2:48 PM, Gregory Motruk  wrote:
> hi, got same problem, any update?
> thanks!
>
> On Tuesday, 7 March 2017 at 15:26:10 UTC+2, quadstor wrote:
>>
>> Kernel version > 4.0 isn't yet supported. We hope to have this ready
>> in a week or two.
>>
>> On Mon, Mar 6, 2017 at 5:58 PM, Duh  wrote:
>> > Hello, module build failed on proxmox-ve host based on debian jessie
>> >
>> > root@b11:~# cat /etc/debian_version
>> > 8.7
>> > root@b11:~# uname -a
>> > Linux b11 4.4.44-1-pve #1 SMP PVE 4.4.44-83 (Wed, 1 Mar 2017 09:22:35
>> > +0100)
>> > x86_64 GNU/Linux
>> > root@b11:~# dpkg -l | grep pve-headers
>> > ii  pve-headers-4.4.44-1-pve 4.4.44-83
>> > amd64The Proxmox PVE Kernel Headers
>> > root@b11:~# dpkg -l | grep pve-kernel
>> > ii  pve-firmware 1.1-11
>> > all
>> > Binary firmware code for the pve-kernel
>> > ii  pve-kernel-4.4.44-1-pve  4.4.44-83
>> > amd64The Proxmox PVE Kernel Image
>> > root@b11:~# dpkg -l | grep quadstor
>> > ii  quadstor-virt3.2.11
>> > amd64QUADStor storage virtualization enterprise edition
>> >
>> > pve-kernel and headers are based on Ubuntu-4.4.0-63.84 packages
>> >
>> > build log follows...
>> >
>> > + [ ! -f /quadstor/lib/modules/corelib.o ]
>> > + uname
>> > + os=Linux
>> > + cd /quadstor/src/export
>> > + make clean
>> > rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d
>> > Module.*
>> > + make
>> > cp /quadstor/lib/modules/corelib.o /quadstor/src/export
>> > make -C /lib/modules/4.4.44-1-pve/build SUBDIRS=/quadstor/src/export
>> > modules
>> > make[1]: Entering directory '/usr/src/linux-headers-4.4.44-1-pve'
>> >  CC [M]  /quadstor/src/export/ldev_linux.o
>> >  CC [M]  /quadstor/src/export/devq.o
>> >  LD [M]  /quadstor/src/export/ldev.o
>> >  CC [M]  /quadstor/src/export/core_linux.o
>> > In file included from include/scsi/scsi_cmnd.h:10:0,
>> > from /quadstor/src/export/core_linux.c:26:
>> > include/scsi/scsi_device.h:223:3: warning: ‘printk’ is an unrecognized
>> > format function type [-Wformat=]
>> >   const char *, ...);
>> >   ^
>> > include/scsi/scsi_device.h:229:40: warning: ‘printk’ is an unrecognized
>> > format function type [-Wformat=]
>> > scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
>> >^
>> > In file included from include/uapi/linux/in.h:23:0,
>> > from include/linux/in.h:23,
>> > from /quadstor/src/export/linuxdefs.h:18,
>> > from /quadstor/src/export/core_linux.c:19:
>> > /quadstor/src/export/core_linux.c: In function ‘sys_sock_create’:
>> > include/linux/socket.h:163:18: warning: passing argument 1 of
>> > ‘sock_create_kern’ makes pointer from integer without a cast
>> > #define AF_INET  2 /* Internet IP Protocol  */
>> >  ^
>> > /quadstor/src/export/core_linux.c:174:28: note: in expansion of macro
>> > ‘AF_INET’
>> >  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
>> > &sys_sock->sock);
>> >^
>> > In file included from include/linux/skbuff.h:29:0,
>> > from include/linux/tcp.h:21,
>> > from /quadstor/src/export/linuxdefs.h:19,
>> > from /quadstor/src/export/core_linux.c:19:
>> > include/linux/net.h:216:5: note: expected ‘struct net *’ but argument is
>> > of
>> > type ‘int’
>> > int sock_create_kern(struct net *net, int family, int type, int proto,
>> > struct socket **res);
>> > ^
>> > /quadstor/src/export/core_linux.c:174:63: warning: passing argument 4 of
>> > ‘sock_create_kern’ makes integer from pointer without a cast
>> >  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
>> > &sys_sock->sock);
>> >   ^
>> > In file included from include/linux/skbuff.h:29:0,
>> > from include/linux/tcp.h:21,
>> > from /quadstor/src/export/linuxdefs.h:19,
>> >  from /quadstor/src/export/core_linux.c:19:
>> > include/linux/net.h:216:5: note: expected ‘int’ but argument is of type
>> > ‘struct socket **’
>> > int sock_create_kern(struct net *net, int family, int type, int proto,
>> > struct socket **res);
>> > ^
>> > /quadstor/src/export/core_linux.c:174:11: error: too few arguments to
>> > function ‘sock_create_kern’
>> >  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
>> > &sys_sock->sock);
>> >   ^
>> > In file included from include/linux/skbuff.h:29:0,
>> > from include/linux/tcp.h:21,
>> > from /quadstor/src/export/linuxdefs.h:19,
>> > from /quadstor/src/export/core_linux.c:19:
>> > include/linux/net.h:216:5: note: declared here
>> > int sock_create_kern(struct net *net, int family, int type, int proto,
>> > struct socket **res);
>> >

Re: [quadstor-virt] Building kernel modules failed!

2017-04-10 Thread QUADStor Support
Sorry, the release has been delayed slightly. We expect the release to
be ready by the end of this week.

On Tue, Mar 28, 2017 at 3:27 PM, QUADStor Support  wrote:
> The next release is early next week. This should be fixed in that.
>
> On Tue, Mar 28, 2017 at 2:48 PM, Gregory Motruk  wrote:
>> hi, got same problem, any update?
>> thanks!
>>
>> On Tuesday, 7 March 2017 at 15:26:10 UTC+2, quadstor wrote:
>>>
>>> Kernel version > 4.0 isn't yet supported. We hope to have this ready
>>> in a week or two.
>>>
>>> On Mon, Mar 6, 2017 at 5:58 PM, Duh  wrote:
>>> > Hello, module build failed on proxmox-ve host based on debian jessie
>>> >
>>> > root@b11:~# cat /etc/debian_version
>>> > 8.7
>>> > root@b11:~# uname -a
>>> > Linux b11 4.4.44-1-pve #1 SMP PVE 4.4.44-83 (Wed, 1 Mar 2017 09:22:35
>>> > +0100)
>>> > x86_64 GNU/Linux
>>> > root@b11:~# dpkg -l | grep pve-headers
>>> > ii  pve-headers-4.4.44-1-pve 4.4.44-83
>>> > amd64The Proxmox PVE Kernel Headers
>>> > root@b11:~# dpkg -l | grep pve-kernel
>>> > ii  pve-firmware 1.1-11
>>> > all
>>> > Binary firmware code for the pve-kernel
>>> > ii  pve-kernel-4.4.44-1-pve  4.4.44-83
>>> > amd64The Proxmox PVE Kernel Image
>>> > root@b11:~# dpkg -l | grep quadstor
>>> > ii  quadstor-virt3.2.11
>>> > amd64QUADStor storage virtualization enterprise edition
>>> >
>>> > pve-kernel and headers are based on Ubuntu-4.4.0-63.84 packages
>>> >
>>> > build log follows...
>>> >
>>> > + [ ! -f /quadstor/lib/modules/corelib.o ]
>>> > + uname
>>> > + os=Linux
>>> > + cd /quadstor/src/export
>>> > + make clean
>>> > rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d
>>> > Module.*
>>> > + make
>>> > cp /quadstor/lib/modules/corelib.o /quadstor/src/export
>>> > make -C /lib/modules/4.4.44-1-pve/build SUBDIRS=/quadstor/src/export
>>> > modules
>>> > make[1]: Entering directory '/usr/src/linux-headers-4.4.44-1-pve'
>>> >  CC [M]  /quadstor/src/export/ldev_linux.o
>>> >  CC [M]  /quadstor/src/export/devq.o
>>> >  LD [M]  /quadstor/src/export/ldev.o
>>> >  CC [M]  /quadstor/src/export/core_linux.o
>>> > In file included from include/scsi/scsi_cmnd.h:10:0,
>>> > from /quadstor/src/export/core_linux.c:26:
>>> > include/scsi/scsi_device.h:223:3: warning: ‘printk’ is an unrecognized
>>> > format function type [-Wformat=]
>>> >   const char *, ...);
>>> >   ^
>>> > include/scsi/scsi_device.h:229:40: warning: ‘printk’ is an unrecognized
>>> > format function type [-Wformat=]
>>> > scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
>>> >^
>>> > In file included from include/uapi/linux/in.h:23:0,
>>> > from include/linux/in.h:23,
>>> > from /quadstor/src/export/linuxdefs.h:18,
>>> > from /quadstor/src/export/core_linux.c:19:
>>> > /quadstor/src/export/core_linux.c: In function ‘sys_sock_create’:
>>> > include/linux/socket.h:163:18: warning: passing argument 1 of
>>> > ‘sock_create_kern’ makes pointer from integer without a cast
>>> > #define AF_INET  2 /* Internet IP Protocol  */
>>> >  ^
>>> > /quadstor/src/export/core_linux.c:174:28: note: in expansion of macro
>>> > ‘AF_INET’
>>> >  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP,
>>> > &sys_sock->sock);
>>> >^
>>> > In file included from include/linux/skbuff.h:29:0,
>>> > from include/linux/tcp.h:21,
>>> > from /quadstor/src/export/linuxdefs.h:19,
>>> > from /quadstor/src/export/core_linux.c:19:
>>> > include/linux/net.h:216:5: note: expected ‘struct net *’ but argument is
>>> > of
>>> > type ‘int’
>>> > int sock_create_kern(struct net *net, int family, int type, int proto,
>>> > struct socket **res);
>>> > ^
>>> > /quadstor/src/export/core_linux.c:174:63: warning: passing argument 4 o

Re: [quadstor-virt] vdisk availability issue

2017-04-11 Thread QUADStor Support
Can you send the diagnostics logs (HTML UI -> System -> Run
Diagnostics -> Submit) to supp...@quadstor.com

On Tue, Apr 11, 2017 at 5:16 PM, Artem Shvidky  wrote:
> Some issue with vdisk. Yestarday host loose the disk connection(host connect
> to quadstor via FC).
>
> vDisk information available in system(via gui and via cli too):
>
> -
> Name   PoolSerial NumberSize(GB) LUN   Status
> backupdisk Default 6e739d5a30881f0793f06fb02270a357 278891 D E
>
>  /quadstor/bin/vdconfig -l -v backupdisk -g Default
>Name: backupdisk
>Pool: Default
>Size: 27889
>   Threshold: 0
>   Deduplication: Yes
> Compression: No
> Verfication: No
>  Write Size: 15.439 TB
>   Write Ops: 18996376
>   Read Size: 6.96 GB
>Read Ops: 274976
>  Unaligned Size: 95.50 MB
>Data Deduped: 3.233 TB
>Dedupe Ratio: 1.265
>   Data Unmapped: 0.00 KB
>   Unmap Ops: 0
>   Blocks Zeroed: 0.00 KB
>   Uncompressed Size: 12.206 TB
> Compressed Size: 0.00 KB
>Compression Hits: 0.00 KB
>  Compression Misses: 0.00 KB
> Verify Hits: 0.00 KB
>   Verify Misses: 0.00 KB
>   Verify Errors: 0.00 KB
> CW Hits: 0
>   CW Misses: 0
> XCopy Write: 0.00 KB
>   XCopy Ops: 0
> Write Same Size: 0.00 KB
>  Write Same Ops: 0
> Populate Token Size: 0.00 KB
>  Populate Token Ops: 0
>Write Token Size: 0.00 KB
> Write Token Ops: 0
> 
>
> In quadstor.log i have: Tue Apr 11 10:44:54 2017 Err: Reading serial number
> failed for /dev/sdd1
> In message log i have:
>
> --
> Apr 11 10:33:34 quadstore kernel: __kern_exit:812 rcache exit
> Apr 11 10:33:34 quadstore kernel: __kern_exit:815 groups free
> Apr 11 10:33:34 quadstore kernel: __kern_exit:818 clear fc rules
> Apr 11 10:33:34 quadstore kernel: __kern_exit:821 end
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
> Apr 11 10:33:35 quadstore kernel: __kern_exit:727 kern_inited; 0
> Apr 11 10:33:36 quadstore mdaemon: ERROR: disk_getsize:87 Unable to open
> device /dev/sdc
> Apr 11 10:33:40 quadstore kernel: ddtable_global_update_peer_count:240 max
> ddtables 6 max ddlookup_count 3626800 max roots 524288 peer count 6
> free_threshold 3659568 crit threshold 3692336 threshold 32768
> Apr 11 10:44:42 quadstore kernel: dd_load_thread:1387 ddtable load for group
> Default took 660046 msecs
> Apr 11 10:44:44 quadstore kernel: scsi10 : QUADStor ldev
> Apr 11 10:44:44 quadstore kernel: scsi 10:0:0:0: Direct-Access QUADSTOR
> VDISK2.0  PQ: 0 ANSI: 6
> Apr 11 10:44:44 quadstore kernel: sd 10:0:0:0: Attached scsi generic sg4
> type 0
> Apr 11 10:44:44 quadstore kernel: sd 10:0:0:0: [sdd] 58487472128 512-byte
> logical blocks: (29.9 TB/27.2 TiB)
> Apr 11 10:44:44 quadstore kernel: sd 10:0:0:0: [sdd] Write Protect is off
> Apr 11 10:44:44 quadstore kernel: sd 10:0:0:0: [sdd] Write cache: enabled,
> read cache: enabled, supports DPO and FUA
> Apr 11 10:44:44 quadstore kernel: sdd: sdd1 sdd2
> Apr 11 10:44:44 quadstore kernel: sd 10:0:0:0: [sdd] Attached SCSI disk
> Apr 11 10:44:54 quadstore mdaemon: ERR: tl_server_dev_mapping:2150 Reading
> serial number failed for /dev/sdd1
> Apr 11 10:44:54 quadstore mdaemon: ERR: tl_server_dev_mapping:2150 Reading
> serial number failed for /dev/sdd2
> 
>
> what i can do to recovery system?
>
> 

Re: [quadstor-virt] vdisk availability issue

2017-04-11 Thread QUADStor Support
Which OS is the host running? Is there anything in the host logs which
can tell why the FC connection was lost?

The warnings in the kernel logs seem ok and some of them are not
really warnings and should be removed.

After the restart, at around Apr 11 13:55:11 the VDisk should be
accessible again over FC. Try a host rescan and let us know if there
are any errors in the host logs.
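
If the host is Linux, a rescan is usually done through sysfs; hostN below
is a placeholder for whichever FC HBA sees the QUADStor target:

echo "- - -" > /sys/class/scsi_host/hostN/scan
echo 1 > /sys/class/fc_host/hostN/issue_lip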

On Tue, Apr 11, 2017 at 6:26 PM, Artem Shvidky  wrote:
>
>
>> On Tuesday, 11 April 2017 at 15:04:48 UTC+3, quadstor wrote:
>>
>> Can you send the diagnostics logs (HTML UI -> System -> Run
>> Diagnostics -> Submit) to sup...@quadstor.com
>>
>> On Tue, Apr 11, 2017 at 5:16 PM, Artem Shvidky  wrote:
>> > Some issue with vdisk. Yestarday host loose the disk connection(host
>> > connect
>> > to quadstor via FC).
>> >
>> > vDisk information available in system(via gui and via cli too):
>> >
>> >
>> > -
>> > Name   PoolSerial NumberSize(GB) LUN
>> > Status
>> > backupdisk Default 6e739d5a30881f0793f06fb02270a357 278891 D E
>> >
>> >  /quadstor/bin/vdconfig -l -v backupdisk -g Default
>> >Name: backupdisk
>> >Pool: Default
>> >Size: 27889
>> >   Threshold: 0
>> >   Deduplication: Yes
>> > Compression: No
>> > Verfication: No
>> >  Write Size: 15.439 TB
>> >   Write Ops: 18996376
>> >   Read Size: 6.96 GB
>> >Read Ops: 274976
>> >  Unaligned Size: 95.50 MB
>> >Data Deduped: 3.233 TB
>> >Dedupe Ratio: 1.265
>> >   Data Unmapped: 0.00 KB
>> >   Unmap Ops: 0
>> >   Blocks Zeroed: 0.00 KB
>> >   Uncompressed Size: 12.206 TB
>> > Compressed Size: 0.00 KB
>> >Compression Hits: 0.00 KB
>> >  Compression Misses: 0.00 KB
>> > Verify Hits: 0.00 KB
>> >   Verify Misses: 0.00 KB
>> >   Verify Errors: 0.00 KB
>> > CW Hits: 0
>> >   CW Misses: 0
>> > XCopy Write: 0.00 KB
>> >   XCopy Ops: 0
>> > Write Same Size: 0.00 KB
>> >  Write Same Ops: 0
>> > Populate Token Size: 0.00 KB
>> >  Populate Token Ops: 0
>> >Write Token Size: 0.00 KB
>> > Write Token Ops: 0
>> >
>> > 
>> >
>> > In quadstor.log i have: Tue Apr 11 10:44:54 2017 Err: Reading serial
>> > number
>> > failed for /dev/sdd1
>> > In message log i have:
>> >
>> >
>> > --
>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:812 rcache exit
>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:815 groups free
>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:818 clear fc rules
>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:821 end
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>> > cmd
>> > TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> > Apr 11 10:33:35 quadstore kernel: __kern_exit:727 kern_inited; 0
>> > Apr 11 10:33:36 quadstore mdaemon: ERROR: disk_getsize:87 Unable to open
>> > device /dev/sdc
>> > Apr 11 10:33:40 quadstore kernel: ddtable_global_update_peer_count:240
>> > max
>> > ddtables 6 max ddlookup_count 3626800 max roots 524288 peer count 6
>> > free_threshold 3659568 crit threshold 3692336 threshold 32768
>> > Apr 11 10:44:42 quadstore kernel: dd_load_thread:1387 ddtable load for
>> > group
>> > Default took 660046 msecs
>> > Apr 11 10:44:44 quadstore kernel: scsi10 : QUADStor ldev
>> > Apr 11 10:44:44 quadstore kernel: scsi 10:0:0:0: Direct-Access
>> > QUADSTOR
>> > VDISK2.0  PQ: 0 ANSI: 6
>> > Apr 11 10:44:44 quadstore kernel: sd 10:0:0

Re: [quadstor-virt] vdisk availability issue

2017-04-11 Thread QUADStor Support
One possible cause for the connectivity loss is that the firmware on the
card is outdated. We see the following in the logs:

Apr 11 13:26:21 quadstore kernel: qla2xxx [:05:00.1]-0063:6:
Failed to load firmware image (ql2400_fw.bin).
Apr 11 13:26:21 quadstore kernel: qla2xxx [:05:00.1]-0090:6:
Firmware image unavailable.
Apr 11 13:26:21 quadstore kernel: qla2xxx [:05:00.1]-0091:6:
Firmware images can be retrieved from:
http://ldriver.qlogic.com/firmware/

You can get a more recent firmware by installing the firmware-qlogic
package (non-free)

Or you can get ql2400_fw.bin from the QLogic site in the link
mentioned above and place it under /lib/firmware/.

Then run /quadstor/bin/qlainst to recreate the initrd image.
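
On a Debian based system (assuming the non-free section is already
enabled in the apt sources) the steps would look roughly like this:

apt-get install firmware-qlogic
# or, if fetching the file manually:
# cp ql2400_fw.bin /lib/firmware/
/quadstor/bin/qlainst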



On Tue, Apr 11, 2017 at 11:32 PM, QUADStor Support  wrote:
> Which OS is the host running ? Anything in the host logs which can
> tell why the FC connection was lost ?
>
> The warnings in the kernel logs seem ok and some of them are not
> really warnings and should be removed.
>
> After the restart, at around Apr 11 13:55:11 the VDisk should be
> accessible again over FC. Try a host rescan and let us know if there
> are any errors in the host logs.
>
> On Tue, Apr 11, 2017 at 6:26 PM, Artem Shvidky  wrote:
>>
>>
>> On Tuesday, 11 April 2017 at 15:04:48 UTC+3, quadstor wrote:
>>>
>>> Can you send the diagnostics logs (HTML UI -> System -> Run
>>> Diagnostics -> Submit) to sup...@quadstor.com
>>>
>>> On Tue, Apr 11, 2017 at 5:16 PM, Artem Shvidky  wrote:
>>> > Some issue with vdisk. Yestarday host loose the disk connection(host
>>> > connect
>>> > to quadstor via FC).
>>> >
>>> > vDisk information available in system(via gui and via cli too):
>>> >
>>> >
>>> > -
>>> > Name   PoolSerial NumberSize(GB) LUN
>>> > Status
>>> > backupdisk Default 6e739d5a30881f0793f06fb02270a357 278891 D E
>>> >
>>> >  /quadstor/bin/vdconfig -l -v backupdisk -g Default
>>> >Name: backupdisk
>>> >Pool: Default
>>> >Size: 27889
>>> >   Threshold: 0
>>> >   Deduplication: Yes
>>> > Compression: No
>>> > Verfication: No
>>> >  Write Size: 15.439 TB
>>> >   Write Ops: 18996376
>>> >   Read Size: 6.96 GB
>>> >Read Ops: 274976
>>> >  Unaligned Size: 95.50 MB
>>> >Data Deduped: 3.233 TB
>>> >Dedupe Ratio: 1.265
>>> >   Data Unmapped: 0.00 KB
>>> >   Unmap Ops: 0
>>> >   Blocks Zeroed: 0.00 KB
>>> >   Uncompressed Size: 12.206 TB
>>> > Compressed Size: 0.00 KB
>>> >Compression Hits: 0.00 KB
>>> >  Compression Misses: 0.00 KB
>>> > Verify Hits: 0.00 KB
>>> >   Verify Misses: 0.00 KB
>>> >   Verify Errors: 0.00 KB
>>> > CW Hits: 0
>>> >   CW Misses: 0
>>> > XCopy Write: 0.00 KB
>>> >   XCopy Ops: 0
>>> > Write Same Size: 0.00 KB
>>> >  Write Same Ops: 0
>>> > Populate Token Size: 0.00 KB
>>> >  Populate Token Ops: 0
>>> >Write Token Size: 0.00 KB
>>> > Write Token Ops: 0
>>> >
>>> > 
>>> >
>>> > In quadstor.log i have: Tue Apr 11 10:44:54 2017 Err: Reading serial
>>> > number
>>> > failed for /dev/sdd1
>>> > In message log i have:
>>> >
>>> >
>>> > --
>>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:812 rcache exit
>>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:815 groups free
>>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:818 clear fc rules
>>> > Apr 11 10:33:34 quadstore kernel: __kern_exit:821 end
>>> > Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect
>>> > cmd
>>> > TLTARGIOCTDISKSTATS errno 1 Operat

Re: [quadstor-virt] Non-existent VD/HDD removal

2017-04-11 Thread QUADStor Support
You should be able to delete them from the database and restart the
service, after which they should no longer appear.

See the attached script for querying the database.
Also after the 'exit 0' statements are the relevant sql statements to
delete the disk, storage pool and vdisk.

The storage pool and vdisk can be deleted by name. The disk can be
identified by the ID seen in the GUI. In this case its 3


On Tue, Apr 11, 2017 at 3:18 PM, Dmitry Polezhaev  wrote:

> After multiple HDD failure I got the state, which I'm not able to clean:
> VD with 'offline' status, pool with no HDD and offline HDD in a database.
> How to clean that? Suppose the physical storage will not be available
> anymore and the task is:
> - Remove VD02 and release pool VP02;
> - Remove VP02;
> - Remove HDD #3.
>
> The VDisks web-form has no delete option.
> The CLI command './vdconfig -x -v VD02 -f ' reports ' Deleting VDisk VD02
> failed '.
> The reboot makes no change.
> The CLI command ' ./bdconfig -l ' reports operating drives
>
>> ID  Vendor ModelSerialNumber NamePool
>> Size Used Status
>> 1   HP Smart Array  GEN001625544471  /dev/cciss/c0d0 Default
>> 610.53   447.94   D
>> 5   RAID   raid5GEN001678510747  /dev/md2VP03
>> 609.97   1.03
>> 2   RAID   raid5GEN001891589836  /dev/md1VP01
>> 819.65   383.27
>
> The CLI ' bdconfig -x -d  ' command requires the 'devicepath'
> knowledge, but is this possible for non-existing device?
>
> Does the ' remove HDD by ID ' command exist?
>
>
> 
>
> 
>
> 
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


deldisk.sh
Description: Bourne shell script


Re: [quadstor-virt] Re: vdisk availability issue

2017-04-12 Thread QUADStor Support
What OS/distribution is the initiator running on?

On Wed, Apr 12, 2017 at 2:53 PM, Artem Shvidky  wrote:
> Hi!
>
> I'm recreate initrd via /quadstor/bin/builditf, load drivers in firmware
> directory restart system, but have the same error in logs:
The firmware rpm is installed correctly, but it should have been
installed before running /quadstor/bin/builditf
We still see the following error
Apr 12 02:14:43 quadstore kernel: qla2xxx [:05:00.1]-0063:6:
Failed to load firmware image (ql2400_fw.bin).

You could also simply uninstall and reinstall the RPM and that should
take care of things:
rpm -e `rpm -qa | grep quadstor-virt`
rpm -i 


>
> tl_server_dev_mapping:2150 Reading serial number failed for /dev/sdd1
>
> i can't find some information about serial number, what serial system try to
> reading?
This is fine and can be ignored. /dev/sdd is in fact the VDisk, which
is correctly up and online.

>
> After rescan disk not be visible too. I'm attach new diagnostic logs.
That seems to be because of the following
Apr 12 02:26:04 quadstore mdaemon: ERR: tl_server_dev_mapping:2150
Reading serial number failed for /dev/sdd2
Apr 12 03:07:11 quadstore mdaemon: ERR: server_init:4665 Unable to
bind to mdaemon port errno 98 Address already in use
Apr 12 03:07:15 quadstore mdaemon: ERR: server_init:4665 Unable to
bind to mdaemon port errno 98 Address already in use

It seems like you are trying to start the daemon manually? The disk is
visible in the logs and is online.
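
The "Address already in use" error usually means mdaemon is already
running; a quick check before starting it by hand:

ps -ef | grep mdaemon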

>
>> On Tuesday, 11 April 2017 at 15:03:26 UTC+3, Artem Shvidky wrote:
>>
>> Some issue with vdisk. Yestarday host loose the disk connection(host
>> connect to quadstor via FC).
>>
>> vDisk information available in system(via gui and via cli too):
>>
>>
>> -
>> Name   PoolSerial NumberSize(GB) LUN   Status
>> backupdisk Default 6e739d5a30881f0793f06fb02270a357 278891 D E
>>
>>  /quadstor/bin/vdconfig -l -v backupdisk -g Default
>>Name: backupdisk
>>Pool: Default
>>Size: 27889
>>   Threshold: 0
>>   Deduplication: Yes
>> Compression: No
>> Verfication: No
>>  Write Size: 15.439 TB
>>   Write Ops: 18996376
>>   Read Size: 6.96 GB
>>Read Ops: 274976
>>  Unaligned Size: 95.50 MB
>>Data Deduped: 3.233 TB
>>Dedupe Ratio: 1.265
>>   Data Unmapped: 0.00 KB
>>   Unmap Ops: 0
>>   Blocks Zeroed: 0.00 KB
>>   Uncompressed Size: 12.206 TB
>> Compressed Size: 0.00 KB
>>Compression Hits: 0.00 KB
>>  Compression Misses: 0.00 KB
>> Verify Hits: 0.00 KB
>>   Verify Misses: 0.00 KB
>>   Verify Errors: 0.00 KB
>> CW Hits: 0
>>   CW Misses: 0
>> XCopy Write: 0.00 KB
>>   XCopy Ops: 0
>> Write Same Size: 0.00 KB
>>  Write Same Ops: 0
>> Populate Token Size: 0.00 KB
>>  Populate Token Ops: 0
>>Write Token Size: 0.00 KB
>> Write Token Ops: 0
>>
>> 
>>
>> In quadstor.log i have: Tue Apr 11 10:44:54 2017 Err: Reading serial
>> number failed for /dev/sdd1
>> In message log i have:
>>
>>
>> --
>> Apr 11 10:33:34 quadstore kernel: __kern_exit:812 rcache exit
>> Apr 11 10:33:34 quadstore kernel: __kern_exit:815 groups free
>> Apr 11 10:33:34 quadstore kernel: __kern_exit:818 clear fc rules
>> Apr 11 10:33:34 quadstore kernel: __kern_exit:821 end
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:34 quadstore mdaemon: WARN: tl_ioctl2:152 failed to exect cmd
>> TLTARGIOCTDISKSTATS errno 1 Operation not permitted
>> Apr 11 10:33:35 quadstore kernel: __kern_exit:727 kern_inited; 0

Re: [quadstor-virt] vdisk availability issue

2017-04-12 Thread QUADStor Support
builditf is correct. qlainst does not do the job correctly; qlainst is
called by builditf.

The simplest way will be to uninstall and reinstall the quadstor package.

A reboot is required after the install/builditf
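
On Debian the reinstall would look roughly like the following (the .deb
file name here is only an example):

dpkg -r quadstor-virt
dpkg -i quadstor-virt-3.2.11-debian-x86_64.deb
/quadstor/bin/builditf
reboot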

On Wed, Apr 12, 2017 at 2:58 PM, Dmitry Polezhaev  wrote:
> On Tuesday, April 11, 2017 at 9:25:21 PM UTC+3, quadstor wrote:
>>
>> Then run /quadstor/bin/qlainst to recreate the initrd image.
>
>
> On Debian Jessie the procedure (I decided to upgrade firmwares as well)
> appeared not so easy:
> root@labstor01:/quadstor/bin# ./qlainst
> qla2xxx driver has not been built for kernel version 3.16.0-4-amd64
> Build the itf package by running /quadstor/bin/builditf first
>
> root@labstor01:/quadstor/bin# ./builditf
> ... A lot of build messages ...
> Already saved original qla2xxx.ko driver
> Recreating initrd image
> update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64
>
> root@labstor01:/quadstor/bin# ./qlainst
> qla2xxx driver has not been built for kernel version 3.16.0-4-amd64
> Build the itf package by running /quadstor/bin/builditf first
>
> So, seems either builditf in vain, or qlainst is not applicable to Debian.
> Is that OK?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Non-existent VD/HDD removal

2017-04-12 Thread QUADStor Support
Should have worked.
You can try the following:

login as root
cd /quadstor/pgsql/bin
su scdbuser
./psql qsdb

Now in the psql shell you can run the SQL commands directly.

For example
select * from physstor;
select * from tdisk;

Or to delete the disk, vdisk etc. (note that the names are string
literals and need to be quoted):

delete from physstor where bid=3;
delete from tdisk where name='VD02';
delete from storagegroup where name='VP02';

ctrl+d
service quadstor restart
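
If you prefer not to use the interactive shell, the same statements can
be run one at a time with psql -c (a sketch, assuming the same scdbuser
role and qsdb database):

su scdbuser -c '/quadstor/pgsql/bin/psql -d qsdb -c "delete from physstor where bid=3;"'
service quadstor restart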



On Wed, Apr 12, 2017 at 3:19 PM, Dmitry Polezhaev  wrote:
> On Tuesday, April 11, 2017 at 9:52:17 PM UTC+3, quadstor wrote:
>>
>> See the attached script for querying the database.
>
>
> Seems the script has wrong references to user, which is allowed to edit
> database...
> root@labstor01:/tmp# ./deldisk.sh
> psql: FATAL:  role "root" does not exist
> psql: FATAL:  role "root" does not exist
> Broken pipe
> psql: FATAL:  role "root" does not exist
> Broken pipe
>
> The runuser folder exists, the password for runuser is empty
> root@labstor01:/tmp# ls /sbin/runuser
> /sbin/runuser
> root@labstor01:/tmp# su runuser
> No passwd entry for user 'runuser'
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Non-existent VD/HDD removal

2017-04-12 Thread QUADStor Support
You need to end the commands with a semicolon (and quote the names as
string literals):

delete from physstor where bid=3;
delete from tdisk where name='VD02';
delete from storagegroup where name='VP02';

Also the second statement has a typo
detele from tdisk where name=VD02

It should be delete and not detele

On Wed, Apr 12, 2017 at 8:03 PM, Dmitry Polezhaev  wrote:
> The 'manual approach' techically works, but there appeared no desired
> result: still the same VD, pool and disk after service restart and even
> system reboot.
> Here is the transcript.
> root@labstor01:~# cd /quadstor/pgsql/bin/
> root@labstor01:/quadstor/pgsql/bin# su scdbuser
> $ ./psql qsdb
> psql (8.4.1)
> Type "help" for help.
> qsdb-# delete from physstor where bid=3
> qsdb-# detele from tdisk where name=VD02
> qsdb-# delete from storagegroup where name=VP02
> qsdb-# \q
> $ exit
> root@labstor01:/quadstor/pgsql/bin# service quadstor restart
> root@labstor01:/quadstor/pgsql/bin# reboot
>
> The only change noticed: VD02 status changed to 'deleting', but no actual
> activity is observing.
> root@labstor01:/quadstor/bin# ./vdconfig -l
> Name PoolSerial NumberSize(GB) LUN   Status
> VD00 Default 6e214d3dcfcd8e1fb71a992f42ef78dd 605  1 E
> VD01 VP016e1b3965130abebe8ab2576cbec914e3 818  2 E
> VD02 VP02Unknown  338  3 Deletion in
> progress
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Managing Virtual Disk Size

2017-04-13 Thread QUADStor Support
The filesystem needs to be resized to use the new size of the VDisk.

For example, for ext4 refer to https://access.redhat.com/articles/1196353
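
A minimal sketch for an ext4 filesystem sitting directly on the VDisk
(assumes the VDisk appears as /dev/sdX on the initiator; adjust the
device name, and grow the partition/LV first if one sits in between):

echo 1 > /sys/block/sdX/device/rescan   # make the kernel pick up the new capacity
resize2fs /dev/sdX                      # grow the ext4 filesystem online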

On Thu, Apr 13, 2017 at 2:12 PM, Gary Eastwood  wrote:
> Currently have a 10TB vDisk called vDisk1 that I want to extend to 15TB.
>
> If I follow documentation here
> http://www.quadstor.com/support/120-creating-and-managing-virtual-disks.html
>
> and run /quadstor/bin/vdconfig -v vDisk1 -s 15360 the vDisk says that it's
> been extended in the GUI, however if I SSH to the server and run df -h then
> I can see that it's still only 10TB.
>
> Have rebooted the server too incase it needed one but still no joy. Can
> anyone point me in the right direction?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Re: Building kernel modules failed!

2017-04-19 Thread QUADStor Support
The build error should now be fixed. The software can be built against
4.9.x kernels

The next release can be downloaded from
http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html

On Tue, Apr 11, 2017 at 2:31 PM, Dmitry Polezhaev  wrote:
> I'm using Debian Jessie also, but without PROXMOX. And my versions are
> completely older:
>>
>> # cat /etc/debian_version
>> 8.7
>> # uname -a
>> Linux labstor01.lab.local 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u2
>> (2017-03-07) x86_64 GNU/Linux
>
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] how to check qsync state

2017-04-19 Thread QUADStor Support
The next release (3.2.12) is now available from
http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html

The sync status as a percentage is now displayed.

On Sat, Mar 18, 2017 at 3:22 AM, Paul Reid  wrote:
> That's great news! We'll be eagerly watching for that update. Thanks!
>
> On Friday, March 17, 2017 at 2:49:45 PM UTC-7, quadstor wrote:
>>
>> The sync status as a percentage will be added in the next release (in
>> a week or two)
>>
>> On Wed, Mar 8, 2017 at 1:27 AM, Paul Reid 
>> wrote:
>> > I know this is an ancient thread, but I'm still seeing the same issue on
>> > 3.2.9. Did this ever get sorted out? It'd be nice to know what
>> > percentage
>> > has been synchronized, so you can get some indication of whether it's
>> > going
>> > or not, and when it might be complete.
>> >
>> > Thanks!
>> >
>> > On Monday, September 2, 2013 at 11:28:56 AM UTC-7, quadstor wrote:
>> >>
>> >> On 9/2/13, Mac Linux  wrote:
>> >> > hello,
>> >> >
>> >> > I just updated my 2 node cluster to 3.0.46 and on one vdisk states
>> >> > "resync
>> >> > needed" on the command qsync -l.
>> >> >
>> >> > How can I check if it is ready or the percentage ?
>> >>
>> >> Usually the VDisk in the master role indicates whether sync has
>> >> started (Resyncing). However currently the status information is
>> >> inconvenient as very little detail of the progress is shown. Either
>> >> the states are Resync needed, Resyncing or Enabled ( Enabled when
>> >> resync completed successfully and disabled for any failure)
>> >>
>> >> We will be improving this in the coming weeks.
>> >>
>> >> > thanks
>> >> > mac
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "QUADStor Storage Virtualization" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to quadstor-vir...@googlegroups.com.
>> >> > For more options, visit https://groups.google.com/groups/opt_out.
>> >> >
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Re: IBM AIX Compatibility or Emulation

2017-04-30 Thread QUADStor Support
We would need more information than this.

The errpt sense information will be useful.

Also, the new release now supports the following option in
/quadstor/etc/quadstor.conf:
CmdDebug=1

This should print any SCSI sense errors in the kernel logs
(/var/log/messages or /var/log/kern.log) along with the CDB and
parameter data information.
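
A minimal sketch for turning the option on (assuming the setting is
picked up when the quadstor service is restarted):

echo "CmdDebug=1" >> /quadstor/etc/quadstor.conf
service quadstor restart
tail -f /var/log/kern.log   # watch for the CDB/sense output while reproducing the issue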

On Thu, Apr 13, 2017 at 6:02 PM, Dmitry Polezhaev  wrote:
> You are right, lau, in AIX the visibility of LUN and Physical Volume
> existence does not mean the Volume Group creation possibility. Our AIX
> Engineer came into the same issue. He also found the mapped LUNs
> (hdisk2,3,4) are not visible by the storage path manager:
> root@tsm64 /home/root# lsdev | grep hdisk
> hdisk0 Available Virtual SCSI Disk Drive
> hdisk1 Available Virtual SCSI Disk Drive
> hdisk2 Available 00-00-01Other FC SCSI Disk Drive
> hdisk3 Available 00-00-01Other FC SCSI Disk Drive
> hdisk7 Available 00-00-01MPIO Other FC SCSI Disk Drive
> hdisk4 Available 00-00-01Other FC SCSI Disk Drive
>
> root@tsm64 /home/root# lspath
> Enabled hdisk0 vscsi0
> Enabled hdisk1 vscsi0
> Defined hdisk7 fscsi0
> Defined hdisk7 fscsi0
> Enabled hdisk7 fscsi0
>
>
> As additional information: how the LUNs parameters are visible in AIX:
> root@tsm64 /home/root# lsattr -El hdisk2
> clr_q no Device CLEARS its Queue on error True
> location Location Label   True
> lun_id0x1Logical Unit Number ID   False
> max_transfer  0x4Maximum TRANSFER SizeTrue
> node_name 0x20e08b94f4dd FC Node Name False
> pvid  none   Physical volume identifier   False
> q_err yesUse QERR bit True
> q_typesimple Queuing TYPE True
> queue_depth   1  Queue DEPTH  True
> reassign_to   120REASSIGN time out value  True
> rw_timeout30 READ/WRITE time out valueTrue
> scsi_id   0xb0800SCSI ID  False
> start_timeout 60 START unit time out valueTrue
> ww_name   0x21e08b94f4dd FC World Wide Name   False
>
> root@tsm64 /home/root# lsattr -El hdisk3
> clr_q no Device CLEARS its Queue on error True
> location Location Label   True
> lun_id0x2Logical Unit Number ID   False
> max_transfer  0x4Maximum TRANSFER SizeTrue
> node_name 0x20e08b94f4dd FC Node Name False
> pvid  none   Physical volume identifier   False
> q_err yesUse QERR bit True
> q_typesimple Queuing TYPE True
> queue_depth   1  Queue DEPTH  True
> reassign_to   120REASSIGN time out value  True
> rw_timeout30 READ/WRITE time out valueTrue
> scsi_id   0xb0800SCSI ID  False
> start_timeout 60 START unit time out valueTrue
> ww_name   0x21e08b94f4dd FC World Wide Name   False
>
> root@tsm64 /home/root# lsattr -El hdisk4
> clr_q no Device CLEARS its Queue on error True
> location Location Label   True
> lun_id0x3Logical Unit Number ID   False
> max_transfer  0x4Maximum TRANSFER SizeTrue
> node_name 0x20e08b94f4dd FC Node Name False
> pvid  none   Physical volume identifier   False
> q_err yesUse QERR bit True
> q_typesimple Queuing TYPE True
> queue_depth   1  Queue DEPTH  True
> reassign_to   120REASSIGN time out value  True
> rw_timeout30 READ/WRITE time out valueTrue
> scsi_id   0xb0800SCSI ID  False
> start_timeout 60 START unit time out valueTrue
> ww_name   0x21e08b94f4dd FC World Wide Name   False
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop

Re: [quadstor-virt] Re: Building kernel modules failed!

2017-08-13 Thread QUADStor Support
There was a name clash with one of the kernel functions. Try
http://www.quadstor.com/virtentdub3z/quadstor-virt-3.2.12.2-debian7-x86_64.deb

On Fri, Aug 11, 2017 at 8:50 PM, Mark Syms  wrote:
> Doesn't appear to be fixed, on Debian 9 (stretch) with either the distro
> binary kernel and headers or a locally built 4.9.41 from the kernel.org git
> repo.
>
> Using quadstor-virt-3.2.12.1 the following occurs
>
> #> /quadstor/bin/builditf
> rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d Module.*
> cp /quadstor/lib/modules/corelib.o /quadstor/src/export
> make -C /lib/modules/4.9.41/build SUBDIRS=/quadstor/src/export modules
> make[1]: Entering directory '/usr/src/linux-headers-4.9.41'
>   CC [M]  /quadstor/src/export/ldev_linux.o
>   CC [M]  /quadstor/src/export/devq.o
>   LD [M]  /quadstor/src/export/ldev.o
>   CC [M]  /quadstor/src/export/core_linux.o
> /quadstor/src/export/core_linux.c:1172:1: error: static declaration of
> ‘bio_free_pages’ follows non-static decln
>  bio_free_pages(bio_t *bio)
>  ^~
> In file included from ./include/linux/blkdev.h:19:0,
>  from /quadstor/src/export/linuxdefs.h:5,
>  from /quadstor/src/export/core_linux.c:19:
> ./include/linux/bio.h:462:13: note: previous declaration of ‘bio_free_pages’
> was here
>  extern void bio_free_pages(struct bio *bio);
>  ^~
> scripts/Makefile.build:293: recipe for target
> '/quadstor/src/export/core_linux.o' failed
> make[2]: *** [/quadstor/src/export/core_linux.o] Error 1
> Makefile:1493: recipe for target '_module_/quadstor/src/export' failed
> make[1]: *** [_module_/quadstor/src/export] Error 2
> make[1]: Leaving directory '/usr/src/linux-headers-4.9.41'
> Makefile:28: recipe for target 'default' failed
> make: *** [default] Error 2
> ERROR: Building kernel modules failed!
>
>
> So, there appears to be a header mismatch somewhere.
>
> Thanks,
>
> On Wednesday, 19 April 2017 20:13:39 UTC+1, quadstor wrote:
>>
>> The build error should now be fixed. The software can be built against
>> 4.9.x kernels
>>
>> The next release can be downloaded from
>>
>> http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html
>>
>> On Tue, Apr 11, 2017 at 2:31 PM, Dmitry Polezhaev  wrote:
>> > I'm using Debian Jessie also, but without PROXMOX. And my versions are
>> > completely older:
>> >>
>> >> # cat /etc/debian_version
>> >> 8.7
>> >> # uname -a
>> >> Linux labstor01.lab.local 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u2
>> >> (2017-03-07) x86_64 GNU/Linux
>> >
>> >
>> >
>> >
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Re: Building kernel modules failed!

2017-08-14 Thread QUADStor Support
The soft lockup needs to be looked into. Can you send us the diagnostics logs?


On Mon, Aug 14, 2017 at 2:51 PM, Mark Syms  wrote:
> That builds, I am seeing errors like this on startup though -
>
>> kernel:[  322.168862] NMI watchdog: BUG: soft lockup - CPU#24 stuck for
>> 23s! [ddloadt:10673]
>
>
> Hopefully this doesn't indicate a serious issue.
>
> On Sunday, 13 August 2017 18:42:59 UTC+1, quadstor wrote:
>>
>> There was a name clash with one of the kernel functions. Try
>>
>> http://www.quadstor.com/virtentdub3z/quadstor-virt-3.2.12.2-debian7-x86_64.deb
>>
>> On Fri, Aug 11, 2017 at 8:50 PM, Mark Syms  wrote:
>> > Doesn't appear to be fixed, on Debian 9 (stretch) with either the distro
>> > binary kernel and headers or a locally built 4.9.41 from the kernel.org
>> > git
>> > repo.
>> >
>> > Using quadstor-virt-3.2.12.1 the following occurs
>> >
>> > #> /quadstor/bin/builditf
>> > rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d
>> > Module.*
>> > cp /quadstor/lib/modules/corelib.o /quadstor/src/export
>> > make -C /lib/modules/4.9.41/build SUBDIRS=/quadstor/src/export modules
>> > make[1]: Entering directory '/usr/src/linux-headers-4.9.41'
>> >   CC [M]  /quadstor/src/export/ldev_linux.o
>> >   CC [M]  /quadstor/src/export/devq.o
>> >   LD [M]  /quadstor/src/export/ldev.o
>> >   CC [M]  /quadstor/src/export/core_linux.o
>> > /quadstor/src/export/core_linux.c:1172:1: error: static declaration of
>> > ‘bio_free_pages’ follows non-static decln
>> >  bio_free_pages(bio_t *bio)
>> >  ^~
>> > In file included from ./include/linux/blkdev.h:19:0,
>> >  from /quadstor/src/export/linuxdefs.h:5,
>> >  from /quadstor/src/export/core_linux.c:19:
>> > ./include/linux/bio.h:462:13: note: previous declaration of
>> > ‘bio_free_pages’
>> > was here
>> >  extern void bio_free_pages(struct bio *bio);
>> >  ^~
>> > scripts/Makefile.build:293: recipe for target
>> > '/quadstor/src/export/core_linux.o' failed
>> > make[2]: *** [/quadstor/src/export/core_linux.o] Error 1
>> > Makefile:1493: recipe for target '_module_/quadstor/src/export' failed
>> > make[1]: *** [_module_/quadstor/src/export] Error 2
>> > make[1]: Leaving directory '/usr/src/linux-headers-4.9.41'
>> > Makefile:28: recipe for target 'default' failed
>> > make: *** [default] Error 2
>> > ERROR: Building kernel modules failed!
>> >
>> >
>> > So, there appears to be a header mismatch somewhere.
>> >
>> > Thanks,
>> >
>> > On Wednesday, 19 April 2017 20:13:39 UTC+1, quadstor wrote:
>> >>
>> >> The build error should now be fixed. The software can be built against
>> >> 4.9.x kernels
>> >>
>> >> The next release can be downloaded from
>> >>
>> >>
>> >> http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html
>> >>
>> >> On Tue, Apr 11, 2017 at 2:31 PM, Dmitry Polezhaev  wrote:
>> >> > I'm using Debian Jessie also, but without PROXMOX. And my versions
>> >> > are
>> >> > completely older:
>> >> >>
>> >> >> # cat /etc/debian_version
>> >> >> 8.7
>> >> >> # uname -a
>> >> >> Linux labstor01.lab.local 3.16.0-4-amd64 #1 SMP Debian
>> >> >> 3.16.39-1+deb8u2
>> >> >> (2017-03-07) x86_64 GNU/Linux
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "QUADStor Storage Virtualization" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to quadstor-vir...@googlegroups.com.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Re: Building kernel modules failed!

2017-08-14 Thread QUADStor Support
Thanks. Do you remember the previous version that was installed?

On Mon, Aug 14, 2017 at 3:53 PM, Mark Syms  wrote:
> Attached.
>
> On Monday, 14 August 2017 11:14:11 UTC+1, quadstor wrote:
>>
>> The soft lockup needs to be looked into. Can you send us the diagnostics
>> logs.
>>
>>
>> On Mon, Aug 14, 2017 at 2:51 PM, Mark Syms  wrote:
>> > That builds, I am seeing errors like this on startup though -
>> >
>> >> kernel:[  322.168862] NMI watchdog: BUG: soft lockup - CPU#24 stuck for
>> >> 23s! [ddloadt:10673]
>> >
>> >
>> > Hopefully this doesn't indicate a serious issue.
>> >
>> > On Sunday, 13 August 2017 18:42:59 UTC+1, quadstor wrote:
>> >>
>> >> There was a name clash with one of the kernel functions. Try
>> >>
>> >>
>> >> http://www.quadstor.com/virtentdub3z/quadstor-virt-3.2.12.2-debian7-x86_64.deb
>> >>
>> >> On Fri, Aug 11, 2017 at 8:50 PM, Mark Syms  wrote:
>> >> > Doesn't appear to be fixed, on Debian 9 (stretch) with either the
>> >> > distro
>> >> > binary kernel and headers or a locally built 4.9.41 from the
>> >> > kernel.org
>> >> > git
>> >> > repo.
>> >> >
>> >> > Using quadstor-virt-3.2.12.1 the following occurs
>> >> >
>> >> > #> /quadstor/bin/builditf
>> >> > rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d
>> >> > Module.*
>> >> > cp /quadstor/lib/modules/corelib.o /quadstor/src/export
>> >> > make -C /lib/modules/4.9.41/build SUBDIRS=/quadstor/src/export
>> >> > modules
>> >> > make[1]: Entering directory '/usr/src/linux-headers-4.9.41'
>> >> >   CC [M]  /quadstor/src/export/ldev_linux.o
>> >> >   CC [M]  /quadstor/src/export/devq.o
>> >> >   LD [M]  /quadstor/src/export/ldev.o
>> >> >   CC [M]  /quadstor/src/export/core_linux.o
>> >> > /quadstor/src/export/core_linux.c:1172:1: error: static declaration
>> >> > of
>> >> > ‘bio_free_pages’ follows non-static decln
>> >> >  bio_free_pages(bio_t *bio)
>> >> >  ^~
>> >> > In file included from ./include/linux/blkdev.h:19:0,
>> >> >  from /quadstor/src/export/linuxdefs.h:5,
>> >> >  from /quadstor/src/export/core_linux.c:19:
>> >> > ./include/linux/bio.h:462:13: note: previous declaration of
>> >> > ‘bio_free_pages’
>> >> > was here
>> >> >  extern void bio_free_pages(struct bio *bio);
>> >> >  ^~
>> >> > scripts/Makefile.build:293: recipe for target
>> >> > '/quadstor/src/export/core_linux.o' failed
>> >> > make[2]: *** [/quadstor/src/export/core_linux.o] Error 1
>> >> > Makefile:1493: recipe for target '_module_/quadstor/src/export'
>> >> > failed
>> >> > make[1]: *** [_module_/quadstor/src/export] Error 2
>> >> > make[1]: Leaving directory '/usr/src/linux-headers-4.9.41'
>> >> > Makefile:28: recipe for target 'default' failed
>> >> > make: *** [default] Error 2
>> >> > ERROR: Building kernel modules failed!
>> >> >
>> >> >
>> >> > So, there appears to be a header mismatch somewhere.
>> >> >
>> >> > Thanks,
>> >> >
>> >> > On Wednesday, 19 April 2017 20:13:39 UTC+1, quadstor wrote:
>> >> >>
>> >> >> The build error should now be fixed. The software can be built
>> >> >> against
>> >> >> 4.9.x kernels
>> >> >>
>> >> >> The next release can be downloaded from
>> >> >>
>> >> >>
>> >> >>
>> >> >> http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html
>> >> >>
>> >> >> On Tue, Apr 11, 2017 at 2:31 PM, Dmitry Polezhaev 
>> >> >> wrote:
>> >> >> > I'm using Debian Jessie also, but without PROXMOX. And my versions
>> >> >> > are
>> >> >> > completely older:
>> >> >> >>
>> >> >> >> # cat /etc/debian_version
>> >> >> >> 8.7
>> >> >> >> # uname -a
>> >> >> >> Linux labstor01.lab.local 3.16.0-4-amd64 #1 SMP Debian
>> >> >> >> 3.16.39-1+deb8u2
>> >> >> >> (2017-03-07) x86_64 GNU/Linux
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > --
>> >> >> > You received this message because you are subscribed to the Google
>> >> >> > Groups
>> >> >> > "QUADStor Storage Virtualization" group.
>> >> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> >> > send
>> >> >> > an
>> >> >> > email to quadstor-vir...@googlegroups.com.
>> >> >> > For more options, visit https://groups.google.com/d/optout.
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "QUADStor Storage Virtualization" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to quadstor-vir...@googlegroups.com.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualiza

Re: [quadstor-virt] Re: Building kernel modules failed!

2017-08-15 Thread QUADStor Support
Seems like a false positive. The soft lockup is reported for a CRC
function which calculates the CRC for the metadata stored on disk.
This function can be called for many blocks on startup. Also, the
thread which does the metadata load does complete.
It looks OK for now, but we will look into this further.
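
If the warning keeps appearing only during the startup metadata load, a
temporary workaround (a sketch, not a fix for the underlying loop) is to
raise the soft-lockup threshold, which the kernel derives as twice
kernel.watchdog_thresh:

sysctl -w kernel.watchdog_thresh=20   # soft-lockup threshold goes from ~20s to ~40s

and set it back to 10 once the service has finished starting.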


On Mon, Aug 14, 2017 at 6:18 PM, Mark Syms  wrote:
> 3.2.12.1.
>
> HTH,
>
> Mark.
>
> On Monday, 14 August 2017 13:46:17 UTC+1, quadstor wrote:
>>
>> Thanks. Do you remember the previous version that was installed ?
>>
>> On Mon, Aug 14, 2017 at 3:53 PM, Mark Syms  wrote:
>> > Attached.
>> >
>> > On Monday, 14 August 2017 11:14:11 UTC+1, quadstor wrote:
>> >>
>> >> The soft lockup needs to be looked into. Can you send us the
>> >> diagnostics
>> >> logs.
>> >>
>> >>
>> >> On Mon, Aug 14, 2017 at 2:51 PM, Mark Syms  wrote:
>> >> > That builds, I am seeing errors like this on startup though -
>> >> >
>> >> >> kernel:[  322.168862] NMI watchdog: BUG: soft lockup - CPU#24 stuck
>> >> >> for
>> >> >> 23s! [ddloadt:10673]
>> >> >
>> >> >
>> >> > Hopefully this doesn't indicate a serious issue.
>> >> >
>> >> > On Sunday, 13 August 2017 18:42:59 UTC+1, quadstor wrote:
>> >> >>
>> >> >> There was a name clash with one of the kernel functions. Try
>> >> >>
>> >> >>
>> >> >>
>> >> >> http://www.quadstor.com/virtentdub3z/quadstor-virt-3.2.12.2-debian7-x86_64.deb
>> >> >>
>> >> >> On Fri, Aug 11, 2017 at 8:50 PM, Mark Syms 
>> >> >> wrote:
>> >> >> > Doesn't appear to be fixed, on Debian 9 (stretch) with either the
>> >> >> > distro
>> >> >> > binary kernel and headers or a locally built 4.9.41 from the
>> >> >> > kernel.org
>> >> >> > git
>> >> >> > repo.
>> >> >> >
>> >> >> > Using quadstor-virt-3.2.12.1 the following occurs
>> >> >> >
>> >> >> > #> /quadstor/bin/builditf
>> >> >> > rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d
>> >> >> > Module.*
>> >> >> > cp /quadstor/lib/modules/corelib.o /quadstor/src/export
>> >> >> > make -C /lib/modules/4.9.41/build SUBDIRS=/quadstor/src/export
>> >> >> > modules
>> >> >> > make[1]: Entering directory '/usr/src/linux-headers-4.9.41'
>> >> >> >   CC [M]  /quadstor/src/export/ldev_linux.o
>> >> >> >   CC [M]  /quadstor/src/export/devq.o
>> >> >> >   LD [M]  /quadstor/src/export/ldev.o
>> >> >> >   CC [M]  /quadstor/src/export/core_linux.o
>> >> >> > /quadstor/src/export/core_linux.c:1172:1: error: static
>> >> >> > declaration
>> >> >> > of
>> >> >> > ‘bio_free_pages’ follows non-static decln
>> >> >> >  bio_free_pages(bio_t *bio)
>> >> >> >  ^~
>> >> >> > In file included from ./include/linux/blkdev.h:19:0,
>> >> >> >  from /quadstor/src/export/linuxdefs.h:5,
>> >> >> >  from /quadstor/src/export/core_linux.c:19:
>> >> >> > ./include/linux/bio.h:462:13: note: previous declaration of
>> >> >> > ‘bio_free_pages’
>> >> >> > was here
>> >> >> >  extern void bio_free_pages(struct bio *bio);
>> >> >> >  ^~
>> >> >> > scripts/Makefile.build:293: recipe for target
>> >> >> > '/quadstor/src/export/core_linux.o' failed
>> >> >> > make[2]: *** [/quadstor/src/export/core_linux.o] Error 1
>> >> >> > Makefile:1493: recipe for target '_module_/quadstor/src/export'
>> >> >> > failed
>> >> >> > make[1]: *** [_module_/quadstor/src/export] Error 2
>> >> >> > make[1]: Leaving directory '/usr/src/linux-headers-4.9.41'
>> >> >> > Makefile:28: recipe for target 'default' failed
>> >> >> > make: *** [default] Error 2
>> >> >> > ERROR: Building kernel modules failed!
>> >> >> >
>> >> >> >
>> >> >> > So, there appears to be a header mismatch somewhere.
>> >> >> >
>> >> >> > Thanks,
>> >> >> >
>> >> >> > On Wednesday, 19 April 2017 20:13:39 UTC+1, quadstor wrote:
>> >> >> >>
>> >> >> >> The build error should now be fixed. The software can be built
>> >> >> >> against
>> >> >> >> 4.9.x kernels
>> >> >> >>
>> >> >> >> The next release can be downloaded from
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >> >> http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html
>> >> >> >>
>> >> >> >> On Tue, Apr 11, 2017 at 2:31 PM, Dmitry Polezhaev 
>> >> >> >> wrote:
>> >> >> >> > I'm using Debian Jessie also, but without PROXMOX. And my
>> >> >> >> > versions
>> >> >> >> > are
>> >> >> >> > completely older:
>> >> >> >> >>
>> >> >> >> >> # cat /etc/debian_version
>> >> >> >> >> 8.7
>> >> >> >> >> # uname -a
>> >> >> >> >> Linux labstor01.lab.local 3.16.0-4-amd64 #1 SMP Debian
>> >> >> >> >> 3.16.39-1+deb8u2
>> >> >> >> >> (2017-03-07) x86_64 GNU/Linux
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> >
>> >> >> >> > --
>> >> >> >> > You received this message because you are subscribed to the
>> >> >> >> > Google
>> >> >> >> > Groups
>> >> >> >> > "QUADStor Storage Virtualization" group.
>> >> >> >> > To unsubscribe from this group and stop receiving emails from
>> >> >> >> > it,
>> >> >> >> > send
>> >> >> >> > an
>> >> >> >> > email to quadstor-vir...@googlegroups.com.
>> >>

Re: [quadstor-virt] Quadstor incredibly slow/not working after unclean shutdown

2017-09-05 Thread QUADStor Support
Please send the diagnostics to supp...@quadstor.com

What was the reason for the shutdown? gdevq is the thread which
processes incoming requests, and if that is blocked it could indicate
a problem with the underlying storage.
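
A quick way to check whether gdevq (or any other thread) is stuck
waiting on storage is to look for tasks in uninterruptible sleep and
for hung-task messages (a sketch):

ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'
dmesg | grep -i "blocked for more than"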

On Tue, Sep 5, 2017 at 4:54 PM,   wrote:
> I am running Quadstor Storage Virtualization 3.2.12.1 on Debian 7 as iSCSI
> target for my XenServer 7 servers. After an unclean shutdown, Quadstor
> suddenly slows down the entire machine to a grinding halt, up to a point
> where there are multiple messages that 'gdevq' blocked for more than 120
> seconds. The XenServers also cannot access the files on the Quadstor
> machine.
>
> After uninstalling Quadstor, the machine is as fast as it has always been.
> Reinstalling Quadstor didn't change that, but after rescanning the database,
> all the symptoms came back and I still can't access my data.
>
> Does anyone have any idea what to do? I can't access any of my data at the
> moment.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Quadstor incredibly slow/not working after unclean shutdown

2017-09-05 Thread QUADStor Support
Please send across the following files

/var/log/dmesg*
/var/log/kern.log*
/var/log/syslog*

And the output of
dmesg
top -b -d 1 -n 60
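
For convenience everything can be bundled into a single archive (a
sketch; the paths are the ones listed above):

dmesg > /tmp/dmesg.now
top -b -d 1 -n 60 > /tmp/top.out
tar czf /tmp/quadstor-logs.tgz /var/log/dmesg* /var/log/kern.log* /var/log/syslog* /tmp/dmesg.now /tmp/top.out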

On Tue, Sep 5, 2017 at 5:31 PM,   wrote:
> Due to the load, I unfortunately can't access the GUI. Is there another way
> to create the diagnostics?
>
> The shutdown was due to a power outage. The underlying storage is a hardware
> RAID 1 on PERC 6/i; the RAID BIOS tells me everything is fine with both
> disks. After booting the server after the power outage, the RAID controller
> was busy writing it's BBU cache to disk for approx. a minute.
>
> Op dinsdag 5 september 2017 13:45:54 UTC+2 schreef quadstor:
>>
>> Please send the diagnostics to sup...@quadstor.com
>>
>> What was the reason for the shutdown ? gdevq is the thread which
>> processes incoming requests, and if that is blocked it could indicate
>> a problem with the underlying storage.
>>
>> On Tue, Sep 5, 2017 at 4:54 PM, wrote:
>> > I am running Quadstor Storage Virtualization 3.2.12.1 on Debian 7 as
>> > iSCSI
>> > target for my XenServer 7 servers. After an unclean shutdown, Quadstor
>> > suddenly slows down the entire machine to a grinding halt, up to a point
>> > where there are multiple messages that 'gdevq' blocked for more than 120
>> > seconds. The XenServers also cannot access the files on the Quadstor
>> > machine.
>> >
>> > After uninstalling Quadstor, the machine is as fast as it has always
>> > been.
>> > Reinstalling Quadstor didn't change that, but after rescanning the
>> > database,
>> > all the symptoms came back and I still can't access my data.
>> >
>> > Does anyone have any idea what to do? I can't access any of my data at
>> > the
>> > moment.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Quadstor incredibly slow/not working after unclean shutdown

2017-11-06 Thread QUADStor Support
Do you have a stack trace for the crash? We are interested in the
reason for the crash.

The problem was attributed to the slowness in loading the
deduplication tables. When a master disk is added to a pool, the
initial set of deduplication tables is sequential on disk, and these
are quickly loaded back into memory on a reboot. The size of these
tables is around 1/8 the size of memory. The second set of
deduplication tables is added on demand when more table entries are
needed and the first set of tables is full. The problem is with the
second set: the location of these tables can be spread across the
disk. For your setup, as per our calculations, around 2 GB are the
sequential tables and 14 GB are the second set. The disk is a RAID 1
array, and at the maximum speed of two disks, reading back 14 GB of
data in mostly 4K blocks can take many minutes to hours.
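
As a rough worked estimate (assuming ~100-150 random 4K reads per second
per 7200 rpm spindle, so roughly 250 IOPS for the mirror when reads are
balanced across both disks):

14 GB / 4 KB            ~ 3.7 million random reads
3.7 million / 250 IOPS  ~ 14,700 seconds ~ 4 hours

which is consistent with the startup times being reported.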

There are two things which we are doing:
1. Create all the deduplication tables sequentially. This will mean
that only one storage pool can maintain the deduplication tables
(mostly the Default pool). Most configurations are this way => This
change is complete.
2. For existing implementations we realign the random blocks to
sequential blocks using a command line tool => This is still being
implemented and the ETA is the end of this month. You would need this
feature.


On Mon, Nov 6, 2017 at 1:42 PM,   wrote:
> What's the status on the fix? The server just crashed and rebooting is going
> to take a couple of hours again due to this bug. This is unacceptable in a
> production environment.
>
> Op dinsdag 5 september 2017 14:07:58 UTC+2 schreef quadstor:
>>
>> Please send across the following files
>>
>> /var/log/dmesg*
>> /var/log/kern.log*
>> /var/log/syslog*
>>
>> And the output of
>> dmesg
>> top -b -d 1 -n 60
>>
>> On Tue, Sep 5, 2017 at 5:31 PM,   wrote:
>> > Due to the load, I unfortunately can't access the GUI. Is there another
>> > way
>> > to create the diagnostics?
>> >
>> > The shutdown was due to a power outage. The underlying storage is a
>> > hardware
>> > RAID 1 on PERC 6/i; the RAID BIOS tells me everything is fine with both
>> > disks. After booting the server after the power outage, the RAID
>> > controller
>> > was busy writing it's BBU cache to disk for approx. a minute.
>> >
>> > Op dinsdag 5 september 2017 13:45:54 UTC+2 schreef quadstor:
>> >>
>> >> Please send the diagnostics to sup...@quadstor.com
>> >>
>> >> What was the reason for the shutdown ? gdevq is the thread which
>> >> processes incoming requests, and if that is blocked it could indicate
>> >> a problem with the underlying storage.
>> >>
>> >> On Tue, Sep 5, 2017 at 4:54 PM, wrote:
>> >> > I am running Quadstor Storage Virtualization 3.2.12.1 on Debian 7 as
>> >> > iSCSI
>> >> > target for my XenServer 7 servers. After an unclean shutdown,
>> >> > Quadstor
>> >> > suddenly slows down the entire machine to a grinding halt, up to a
>> >> > point
>> >> > where there are multiple messages that 'gdevq' blocked for more than
>> >> > 120
>> >> > seconds. The XenServers also cannot access the files on the Quadstor
>> >> > machine.
>> >> >
>> >> > After uninstalling Quadstor, the machine is as fast as it has always
>> >> > been.
>> >> > Reinstalling Quadstor didn't change that, but after rescanning the
>> >> > database,
>> >> > all the symptoms came back and I still can't access my data.
>> >> >
>> >> > Does anyone have any idea what to do? I can't access any of my data
>> >> > at
>> >> > the
>> >> > moment.
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google
>> >> > Groups
>> >> > "QUADStor Storage Virtualization" group.
>> >> > To unsubscribe from this group and stop receiving emails from it,
>> >> > send
>> >> > an
>> >> > email to quadstor-vir...@googlegroups.com.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Quadstor incredibly slow/not working after unclean shutdown

2017-11-30 Thread QUADStor Support
Please try 3.2.13 from
http://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html

The procedure will be

1. Uninstall the old package (dpkg -r quadstor-virt)
2. Install the new package (dpkg -i
quadstor-virt-3.2.13-debian7-x86_64.deb). Do not restart or attempt to
start the quadstor service as that will take the usual amount of time.
3. /quadstor/pgsql/etc/pgsql start
4. echo "update sysinfo set dalign=1" | /quadstor/pgsbin/psql -U scdbuser qsdb
5. service quadstor start

A few points to remember
a. Please ensure that there are backups done
b. The downtime will be at least twice the time taken to start the
quadstor service
c. Send us the diagnostic logs after step 5
d. In case you run into issues leave the system as is (no
restart/reboot etc) and send us an email to supp...@quadstor.com



On Mon, Nov 6, 2017 at 4:48 PM,   wrote:
> I unfortunately do not have a stack trace. It seems however that the crash
> itself was not necessarily caused by the Quadstor software. The real problem
> lies within the fact that rebooting the server (for any reason) is
> impossible because it takes hours to come back up.
>
> I can confirm the issue is caused by the loading of the deduplication
> tables; ddloadt is taking up 99,99% of IO according to iotop.
>
> As there is nothing I can do about it, I will just have to wait for the
> deduplication tables to be loaded. I look forward to the release of the new
> command line tool.
>
> Op maandag 6 november 2017 12:08:37 UTC+1 schreef quadstor:
>>
>> Do you have a stack trace for the crash. We are interested in the
>> reason for the crash.
>>
>> The problem was attributed to the slowness in loading the
>> deduplication tables. When a master disk is added to a pool the
>> initial set of deduplication tables are sequential on disk and these
>> are quickly loaded back into memory on a reboot. The size of these
>> tables are around 1/8 the size of the memory. The second set of
>> deduplication tables are added on demand when more table entries are
>> needed and the first set of tables are full. The problem is with the
>> second set, the location of these tables can be spread across the
>> disk. For your setup as per our calculations around 2GB are the
>> sequential tables and 14GB are the second set. The disk is a RAID 1
>> disk and it seems that with just the max speed of two disks reading
>> back 14GB of data which are mainly 4K blocks can take many minutes to
>> hours.
>>
>> There are two things which we are doing.
>> 1. Create all the deduplication tables sequentially. This will mean
>> that only one storage pool can maintain the deduplication tables
>> (mostly the Default pool). Most configurations are this way => This
>> change is complete
>> 2. For existing implementation we realign the random blocks to
>> sequential blocks using a command line tool => This is still being
>> implemented and ETA is the end of this month. You would need this
>> feature
>>
>>
>> On Mon, Nov 6, 2017 at 1:42 PM,  wrote:
>> > What's the status on the fix? The server just crashed and rebooting is
>> > going
>> > to take a couple of hours again due to this bug. This is unacceptable in
>> > a
>> > production environment.
>> >
>> > Op dinsdag 5 september 2017 14:07:58 UTC+2 schreef quadstor:
>> >>
>> >> Please send across the following files
>> >>
>> >> /var/log/dmesg*
>> >> /var/log/kern.log*
>> >> /var/log/syslog*
>> >>
>> >> And the output of
>> >> dmesg
>> >> top -b -d 1 -n 60
>> >>
>> >> On Tue, Sep 5, 2017 at 5:31 PM,   wrote:
>> >> > Due to the load, I unfortunately can't access the GUI. Is there
>> >> > another
>> >> > way
>> >> > to create the diagnostics?
>> >> >
>> >> > The shutdown was due to a power outage. The underlying storage is a
>> >> > hardware
>> >> > RAID 1 on PERC 6/i; the RAID BIOS tells me everything is fine with
>> >> > both
>> >> > disks. After booting the server after the power outage, the RAID
>> >> > controller
>> >> > was busy writing it's BBU cache to disk for approx. a minute.
>> >> >
>> >> > Op dinsdag 5 september 2017 13:45:54 UTC+2 schreef quadstor:
>> >> >>
>> >> >> Please send the diagnostics to sup...@quadstor.com
>> >> >>
>> >> >> What was the reason for the shutdown ? gdevq is the thread which
>> >> >> processes incoming requests, and if that is blocked it could
>> >> >> indicate
>> >> >> a problem with the underlying storage.
>> >> >>
>> >> >> On Tue, Sep 5, 2017 at 4:54 PM, wrote:
>> >> >> > I am running Quadstor Storage Virtualization 3.2.12.1 on Debian 7
>> >> >> > as
>> >> >> > iSCSI
>> >> >> > target for my XenServer 7 servers. After an unclean shutdown,
>> >> >> > Quadstor
>> >> >> > suddenly slows down the entire machine to a grinding halt, up to a
>> >> >> > point
>> >> >> > where there are multiple messages that 'gdevq' blocked for more
>> >> >> > than
>> >> >> > 120
>> >> >> > seconds. The XenServers also cannot access the files on the
>> >> >> > Quadstor
>> >> >> > machine.
>> >> >> >
>> >> >> > After uninstalling Quadstor, the 

[quadstor-virt] QLogic firmware version 8.06 in the latest RHEL/CentOS

2017-12-10 Thread QUADStor Support
The FC target-mode driver has issues with the 8.06 firmware installed
by linux-firmware-20170606-56.gitc990aae.el7.noarch on RHEL/CentOS
Linux release 7.3.1611 (Core)

The recent 8.07 firmware is recommended which can be got from
https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/

Firmware can also be updated using the following links
http://www.quadstor.com/vtlnext/firmware.tgz
http://www.quadstor.com/vtlnext/firmware2400.tgz

Firmware upgrade procedure

On the VTL system as root user
1. tar xvzf firmware.tgz (For the qla/qle24xx cards use tar xvzf
firmware2400.tgz)
2. sh firmware/copyfirmware.sh
3. Reboot
4. grep "fw=" /var/log/dmesg This should be fw=8.07.00 or greater

The firmware update is needed for the 24xx and 25xx cards. A firmware update
in general is recommended irrespective of the OS distribution.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Deduplication metadata on a separate fast disk. How can I verify that?

2018-02-08 Thread QUADStor Support
If the pool "Enable Dedupe Metadata" is disabled then there are no
tables initialized for that pool. When you try to add a disk to this
pool without a disk configured in the Default pool the disk addition
will fail. This is because there is no way to store the deduplication
entries for the vdisks  in the pool.

Alternatively when you add a disk to a pool with "Enable Dedupe
Metadata" the initial used space of the disk is much higher than when
adding the first disk to a pool with "Enable Dedupe Metadata"
unchecked.

On Tue, Feb 6, 2018 at 8:23 PM, Aleksey Maksimov
 wrote:
> Hello QUADStor Team!
>
> QUADStor Storage Virtualization 3.2.13 on Debian GNU/Linux 9.3 (stretch)
>
>
> 1) I renamed "Default" pool to "SSD-Metadata-Pool". In this pool I added a
> fast SSD drive
> 2) Then I created a new pool "HDD-Backup-Pool" and added to it the slow
> disks. This pool has disabled features "Enable Dedupe Metadata"/"Enable
> Logs"
> 3) Then in the pool "HDD-Backup-Pool" I created the "Backup-vDisk1" virtual
> disk with enabled option "Enable Deduplication"
>
> Now the question
>
> How can I be sure that the "HDD-Backup-Pool" deduplication metadata is on
> the fast disk of the "SSD-Metadata-Pool" pool ?
> How can I verify that?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Quadstor incredibly slow/not working after unclean shutdown

2018-02-13 Thread QUADStor Support
On Thu, Nov 30, 2017 at 3:53 PM, QUADStor Support  wrote:

> 4. echo "update sysinfo set dalign=1" | /quadstor/pgsbin/psql -U scdbuser qsdb
This should be

echo "update sysinfo set dalign=1" | /quadstor/pgsql/bin/psql -U scdbuser qsdb

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Dedup table limit reached

2018-02-16 Thread QUADStor Support
The limit on the number of deduplication tables has changed (reduced)
to 1 since the last release. The documentation is fixed now to reflect
this change. Please see "Enabling dedupe metadata for a pool" in
https://www.quadstor.com/storage-pools.html

To explain the behavior: we now try to create all the deduplication
tables during disk initialization itself. This keeps the tables sequential
on disk with very little random IO in the process. However, since we
create the tables beforehand we can only assign the metadata to one
pool. So the preferred way is to assign storage to the
"Default" pool first; all other pools will then store their
deduplication entries in the Default pool.


To revert to the old behavior:

Edit /quadstor/etc/quadstor.conf and add the following line
SequentialTables=0
service quadstor restart

But you will have to remove the configured disks and then try adding them again.


On Fri, Feb 16, 2018 at 7:58 PM, Fabien Rouach  wrote:
> Hi,
> I've created a first pool (Dedup Disk and compression enabled) and added a
> disk
> Created a second pool (same settings), when trying to add a disk, got this
> error message :
> ERROR: Reply from server is "Pool Internal_5TB needs to maintain its own
> dedupe tables, but dedupe tables limit reached current tables 1 max 1"
> What I'm trying to do:
> Having 2 pool, totally independent from each other, so i can loose one,
> without loosing data of the second.
> And be able to move a disk pool to another system.
>
> I'm a bit lost about how dedup tables are managed and what should be the
> good configuration to get that
>
> When transferring a pool to another system do I need to have it's dedup
> tables too?
> If not, i could add another disk i have to default pool (no disk in there
> actually) and let dedup tables in default pool, would it be the right way to
> do it?
>
> Thanks
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] ISCI volume with NTFS blocked on Read Only

2018-03-07 Thread QUADStor Support
Please send across the /var/log/messages (/var/log/syslog and
/var/log/kern.log if on Debian) to us at supp...@quadstor.com

On Thu, Mar 8, 2018 at 2:15 AM, Eric GROULT  wrote:
> Hello,
>
> The setup is a MS VM 2008R2 on esxserver 5.5. A SCSI volume from QUADSTOR
> 3.2.13 is attached directly to the 2008R2 VM.
>
> Since a crash from the MS VM 2008R2, the ISCSI volume is on READ ONLY.
>
> i've test with diskpart to disable the read only but without succes.
>
> When i when to generate the DEBUG package, it said
> -> ERROR: Error packing diagnostics file
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Very low perf (vs vtl)

2018-03-14 Thread QUADStor Support
Please send the diagnostics file.

Also a btrace output:
btrace -w 120 <device> > /tmp/btrace.out

Where <device> will be the device path of the RAID device configured
in the storage pool. If there are multiple devices please run btrace
against all devices configured in the pool. Also btrace needs to be
run during the backup while it's slow.

On Wed, Mar 14, 2018 at 7:55 AM, Fabien Rouach  wrote:
> Hi
> I've bee doing several test on quadstor vtl, and had nice perf (close to
> 100MB/s sometimes)
> decided to switch to disk backup using quadstor storage, and perf are very
> low on same hardware (only a couple of MB/s)
>
> using HP servers
> DL380 g6 / smart array p410
> DL380 g8 / smart P420 or P822
>
> sata drives (12x 4TB 5900rpm or 22x 1TB 7200rpm) raid 5 or 6
>
> all accesses or done through 4Gb/s FC
>
> once switches to quadstor storage, i can see on the quadstor system in
> iostat that disk at 100% almost all the time, with between 500KB and
> sometime up to 7-8MB transfer (but more rare)
>
> could it be something related with block size the storage is writing,
> compared to VTL to make the RAID controler saturating or something likt
> that?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Very low perf (vs vtl)

2018-03-16 Thread QUADStor Support
btrace is installed by the blktrace package. It's a script which runs
blktrace and passes the output to blkparse.

yum install -y blktrace

Then btrace should be in /bin/ and in /usr/bin


On Thu, Mar 15, 2018 at 12:03 AM, Fabien Rouach  wrote:
> I don't have btrace on my centos 7 server, what should I install?
> is blktrace doing the same thing?
>
>
> On Wednesday, March 14, 2018 at 07:40:50 UTC-4, quadstor wrote:
>>
>> Please send the diagnostics file.
>>
>> Also a btrace output
>> btrace -w 120   > /tmp/btrace.out
>>
>> Where  will be the device path of the RAID device configured
>> in the storage pool. If there are multiple devices please run btrace
>> against all devices configured in the pool. Also btrace needs to be
>> run during the backup when its slow.
>>
>> On Wed, Mar 14, 2018 at 7:55 AM, Fabien Rouach 
>> wrote:
>> > Hi
>> > I've bee doing several test on quadstor vtl, and had nice perf (close to
>> > 100MB/s sometimes)
>> > decided to switch to disk backup using quadstor storage, and perf are
>> > very
>> > low on same hardware (only a couple of MB/s)
>> >
>> > using HP servers
>> > DL380 g6 / smart array p410
>> > DL380 g8 / smart P420 or P822
>> >
>> > sata drives (12x 4TB 5900rpm or 22x 1TB 7200rpm) raid 5 or 6
>> >
>> > all accesses or done through 4Gb/s FC
>> >
>> > once switches to quadstor storage, i can see on the quadstor system in
>> > iostat that disk at 100% almost all the time, with between 500KB and
>> > sometime up to 7-8MB transfer (but more rare)
>> >
>> > could it be something related with block size the storage is writing,
>> > compared to VTL to make the RAID controler saturating or something likt
>> > that?
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Very low perf (vs vtl)

2018-03-16 Thread QUADStor Support
Please send the logs to supp...@quadstor.com

On Thu, Mar 15, 2018 at 12:18 AM, QUADStor Support  wrote:
> btrace is installed by the blktrace package. Its a script which runs
> blktrace and passes the output to blkparse.
>
> yum install -y blktrace
>
> Then btrace should be in /bin/ and in /usr/bin
>
>
> On Thu, Mar 15, 2018 at 12:03 AM, Fabien Rouach  
> wrote:
>> I don't have btrace on my centos 7 server, what should I install?
>> is blktrace doing the same thing?
>>
>>
>> On Wednesday, March 14, 2018 at 07:40:50 UTC-4, quadstor wrote:
>>>
>>> Please send the diagnostics file.
>>>
>>> Also a btrace output
>>> btrace -w 120   > /tmp/btrace.out
>>>
>>> Where  will be the device path of the RAID device configured
>>> in the storage pool. If there are multiple devices please run btrace
>>> against all devices configured in the pool. Also btrace needs to be
>>> run during the backup when its slow.
>>>
>>> On Wed, Mar 14, 2018 at 7:55 AM, Fabien Rouach 
>>> wrote:
>>> > Hi
>>> > I've bee doing several test on quadstor vtl, and had nice perf (close to
>>> > 100MB/s sometimes)
>>> > decided to switch to disk backup using quadstor storage, and perf are
>>> > very
>>> > low on same hardware (only a couple of MB/s)
>>> >
>>> > using HP servers
>>> > DL380 g6 / smart array p410
>>> > DL380 g8 / smart P420 or P822
>>> >
>>> > sata drives (12x 4TB 5900rpm or 22x 1TB 7200rpm) raid 5 or 6
>>> >
>>> > all accesses or done through 4Gb/s FC
>>> >
>>> > once switches to quadstor storage, i can see on the quadstor system in
>>> > iostat that disk at 100% almost all the time, with between 500KB and
>>> > sometime up to 7-8MB transfer (but more rare)
>>> >
>>> > could it be something related with block size the storage is writing,
>>> > compared to VTL to make the RAID controler saturating or something likt
>>> > that?
>>> >
>>> > --
>>> > You received this message because you are subscribed to the Google
>>> > Groups
>>> > "QUADStor Storage Virtualization" group.
>>> > To unsubscribe from this group and stop receiving emails from it, send
>>> > an
>>> > email to quadstor-vir...@googlegroups.com.
>>> > For more options, visit https://groups.google.com/d/optout.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "QUADStor Storage Virtualization" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to quadstor-virt+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Dedup table limit reached

2018-04-19 Thread QUADStor Support
Please send the diagnostics logs to supp...@quadstor.com.

The error indicates that the change to quadstor.conf is not being picked up
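
A quick way to confirm that the setting is actually in the file the daemon
reads (a hedged check, using the default install path shown in the quoted
reply below):

grep -i SequentialTables /quadstor/etc/quadstor.conf
service quadstor restart

If the grep prints nothing, the SequentialTables=0 line never made it into
/quadstor/etc/quadstor.conf on that system.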

On Thu, Apr 19, 2018 at 11:26 PM, Fabien Rouach  wrote:
> I've done the changes (rebuilt a system from scratch)
> noe got that error when adding disk to the second pool:
> ERROR: Reply from server is "Pool D6000_Test needs to maintain its own
> dedupe tables, but dedupe tables limit reached, ddlookup count 8388608 max
> roots 2097152 max 9207291"
>
> Note that my first pool include 1 disk of 54.5 TB
>
> On Friday, February 16, 2018 at 09:57:59 UTC-5, quadstor wrote:
>>
>> The limit on the number of deduplication tables has changed (reduced)
>> to 1 since the last release. The documentation is fixed now to reflect
>> this change. Please see "Enabling dedupe metadata for a pool" in
>> https://www.quadstor.com/storage-pools.html
>>
>> To explain the behavior, we now try to create the all deduplication
>> tables during disk initialization itself. Keeps the table sequential
>> on disk and very little random IO in the process. However since we
>> create the tables beforehand we can only assign the metadata to one
>> pool. So the preferred way to create will be to assign storage to the
>> "Default" pool first and then all other pools with store the
>> deduplication entries in the default pool.
>>
>>
>> To revert back to the old behavior
>>
>> Edit /quadstor/etc/quadstor.conf and add the following line
>> SequentialTables=0
>> service quadstor restart
>>
>> But you will have to remove the configured disks and then try adding them
>> again.
>>
>>
>> On Fri, Feb 16, 2018 at 7:58 PM, Fabien Rouach 
>> wrote:
>> > Hi,
>> > I've created a first pool (Dedup Disk and compression enabled) and added
>> > a
>> > disk
>> > Created a second pool (same settings), when trying to add a disk, got
>> > this
>> > error message :
>> > ERROR: Reply from server is "Pool Internal_5TB needs to maintain its own
>> > dedupe tables, but dedupe tables limit reached current tables 1 max 1"
>> > What I'm trying to do:
>> > Having 2 pool, totally independent from each other, so i can loose one,
>> > without loosing data of the second.
>> > And be able to move a disk pool to another system.
>> >
>> > I'm a bit lost about how dedup tables are managed and what should be the
>> > good configuration to get that
>> >
>> > When transferring a pool to another system do I need to have it's dedup
>> > tables too?
>> > If not, i could add another disk i have to default pool (no disk in
>> > there
>> > actually) and let dedup tables in default pool, would it be the right
>> > way to
>> > do it?
>> >
>> > Thanks
>> >
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups
>> > "QUADStor Storage Virtualization" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an
>> > email to quadstor-vir...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Web interface not working

2018-11-06 Thread QUADStor Support
Hi,

A restart is most likely needed. Probably one of the kernel tasks
hung, or the mdaemon itself.

Please check for any core files under /
ls -l /core*

If you find any, please run a file command on the core file to see if
it's related to 'mdaemon'. If yes, please try the following

yum install -y gdb
gdb /quadstor/sbin/mdaemon <core file> -batch -ex bt > /tmp/trace.out

Then send us /tmp/trace.out

If there are no core files and if mdaemon is still running (ps -aef |
grep mdaemon) please try
yum install -y gdb
gstack <mdaemon pid> > /tmp/trace.out

Then send us /tmp/trace.out

Also please try the following and send us /var/log/messages
echo t > /proc/sysrq-trigger
sleep 30
On Tue, Nov 6, 2018 at 10:29 PM Fabien Rouach  wrote:
>
> On my quadstor 3.2.15 the web interface doesn't answer anymore, but anything 
> else still working.
> Any way to restart the web interface without restarting the whole server?
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups 
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Is there a by virtual disk io control?

2018-12-01 Thread QUADStor Support
Yes there are per vdisk and per physical disk limits to the available
resources. For the vdisk it's the amount of metadata that can be cached
in memory (called AmapThreshold). The limit for the metadata is around
6% of the available memory (not configurable at the moment but we can
change this). Amap (Address Maps) are metadata blocks which maintain
the LBA mapping, we need this information when we need to read a block
asked by the host. The more these are in memory the better, but we try
to balance the number of amaps per vdisk so that one single vdisk does
not end up caching more, thereby decreasing the performance of the
others. However, if only one vdisk is being used, as in this case, it's
most likely that all the amaps in memory are for that vdisk.

Back to the performance numbers, there are other factors like how much
of the vdisk has valid data. For example the following will lead to
almost no io on the physical disks
1. zeroed data
2. LBA unmapped blocks (blocks which were never written into)
3. Deduplicated blocks which are aleady in a read cache we maintain

On Sat, Dec 1, 2018 at 11:43 PM Fabien Rouach  wrote:
>
> Hi,
> In a intensive read-only test situation, i have the following:
> My quadstor have several virtual disks configured, I'm only reading from one 
> at the time.
> The windows "client" machine shows disk active time at 100%
> In quadstor an iostat command show %util not over 70%
> lot of free ram (over 40GB)
> very low CPU usage
>
> Is there any kind of io control that limit usage of a single virtual disk to 
> save ressources for others or something like that?
>
> --
> You received this message because you are subscribed to the Google Groups 
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to quadstor-virt+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Physical disk pool almost full (about 90%)

2019-05-25 Thread QUADStor Support
You could try btrace and send it across

yum install -y blktrace
btrace -w 60 /dev/sda > /tmp/btrace.out

Replace /dev/sda with the default pool device path. Then send across
/tmp/btrace.out

The disk usage is probably because the service shutdown tries to sync
the deduplication tables back to disk. If the deduplication tables are
sequential this should not take much time but older installations or
installations with SequentialTables=0 in quadstor.conf can take a lot
more time to sync because, while the initial tables may be sequential on
disk, the ones created later will not be, and it can take many minutes to
shut down or reload the tables into memory.

An improvement for this behavior has gone recently into the VTL
product and we will be implementing this for the virtualization
product in a week or two and this should solve the slow shutdown/load.
We were able to bring down the load time from some of the extreme
cases (60+ minutes) to under 10 minutes.

On Sat, May 25, 2019 at 1:48 PM Fabien Rouach  wrote:
>
> Hi,
> one of my physical disk went almost full (90%) (not in the default pool)
> I'm accessing through FC and LUN stop responding (offline in vmware)
> I'm using v3.2.15
> disconnected everything in fc and tried stopping service, nothing seems to 
> happen, but my default pool disk usage shows 100% continuously
>
> What could it be actually doing?
> is it safe to force shutdown the machine?
> I think i had read about the 10% overhead in the past, is there a way to 
> change that, even temporarly so i can do cleanup in the LUN?
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups 
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/quadstor-virt/d2eae24c-ea6a-40f0-b7b6-186c327ad946%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnnXZS55QdXLt475LfEay8hPFhX8u2n3spDLQJ1is4XbbQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[quadstor-virt] Release 3.2.17

2019-05-29 Thread QUADStor Support
Mainly bugfixes from the VTL code have been merged into this release.
Two changes in this release
1. Deduplication tables now sync faster during service shutdown and
load back faster during startup.
2. Due to an incorrect socket shutdown call, threads waiting for
sockets to shutdown might wait indefinitely. This has been fixed.


-- 
Virtual Tape Library (VTL) https://www.quadstor.com/virtual-tape-library.html
VTL software download
https://www.quadstor.com/vtl-extended-edition-downloads.html
VTL documentation https://www.quadstor.com/vtl-documentation.html

Storage Virtualization with VAAI
https://www.quadstor.com/storage-virtualization.html
Virtualization software download
https://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html
Virtualization documentation
http://www.quadstor.com/storage-virtualization-documentation.html

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnk%2BC6qsmi2NtNu%2Bi%3DtAW5sLHM_Gc8F7QrnDaKiJ7nNQ2A%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [quadstor-virt] Re: Building kernel modules failed!

2019-06-05 Thread QUADStor Support
Hi,

We need to fix a few build issues for kernels >= 4.14.x.  It might
take us a day or two.


On Wed, Jun 5, 2019 at 10:49 PM Louise Li  wrote:
>
> This is the year ago message. but I got the same problem now. My OS version 
> is ubuntu 18.04.2.  What I need to update ?
>
>
> On Monday, March 6, 2017 at 6:42:20 AM UTC-6, Duh wrote:
>>
>> Hello, module build failed on proxmox-ve host based on debian jessie
>>
>> root@b11:~# cat /etc/debian_version
>> 8.7
>> root@b11:~# uname -a
>> Linux b11 4.4.44-1-pve #1 SMP PVE 4.4.44-83 (Wed, 1 Mar 2017 09:22:35 +0100) 
>> x86_64 GNU/Linux
>> root@b11:~# dpkg -l | grep pve-headers
>> ii  pve-headers-4.4.44-1-pve 4.4.44-83  
>> amd64The Proxmox PVE Kernel Headers
>> root@b11:~# dpkg -l | grep pve-kernel
>> ii  pve-firmware 1.1-11 all  
>> Binary firmware code for the pve-kernel
>> ii  pve-kernel-4.4.44-1-pve  4.4.44-83  
>> amd64The Proxmox PVE Kernel Image
>> root@b11:~# dpkg -l | grep quadstor
>> ii  quadstor-virt3.2.11 
>> amd64QUADStor storage virtualization enterprise edition
>>
>> pve-kernel and headers are based on Ubuntu-4.4.0-63.84 packages
>>
>> build log follows...
>>
>> + [ ! -f /quadstor/lib/modules/corelib.o ]
>> + uname
>> + os=Linux
>> + cd /quadstor/src/export
>> + make clean
>> rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d Module.*
>> + make
>> cp /quadstor/lib/modules/corelib.o /quadstor/src/export
>> make -C /lib/modules/4.4.44-1-pve/build SUBDIRS=/quadstor/src/export modules
>> make[1]: Entering directory '/usr/src/linux-headers-4.4.44-1-pve'
>>  CC [M]  /quadstor/src/export/ldev_linux.o
>>  CC [M]  /quadstor/src/export/devq.o
>>  LD [M]  /quadstor/src/export/ldev.o
>>  CC [M]  /quadstor/src/export/core_linux.o
>> In file included from include/scsi/scsi_cmnd.h:10:0,
>> from /quadstor/src/export/core_linux.c:26:
>> include/scsi/scsi_device.h:223:3: warning: ‘printk’ is an unrecognized 
>> format function type [-Wformat=]
>>   const char *, ...);
>>   ^
>> include/scsi/scsi_device.h:229:40: warning: ‘printk’ is an unrecognized 
>> format function type [-Wformat=]
>> scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
>>^
>> In file included from include/uapi/linux/in.h:23:0,
>> from include/linux/in.h:23,
>> from /quadstor/src/export/linuxdefs.h:18,
>> from /quadstor/src/export/core_linux.c:19:
>> /quadstor/src/export/core_linux.c: In function ‘sys_sock_create’:
>> include/linux/socket.h:163:18: warning: passing argument 1 of 
>> ‘sock_create_kern’ makes pointer from integer without a cast
>> #define AF_INET  2 /* Internet IP Protocol  */
>>  ^
>> /quadstor/src/export/core_linux.c:174:28: note: in expansion of macro 
>> ‘AF_INET’
>>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, 
>> &sys_sock->sock);
>>^
>> In file included from include/linux/skbuff.h:29:0,
>> from include/linux/tcp.h:21,
>> from /quadstor/src/export/linuxdefs.h:19,
>> from /quadstor/src/export/core_linux.c:19:
>> include/linux/net.h:216:5: note: expected ‘struct net *’ but argument is of 
>> type ‘int’
>> int sock_create_kern(struct net *net, int family, int type, int proto, 
>> struct socket **res);
>> ^
>> /quadstor/src/export/core_linux.c:174:63: warning: passing argument 4 of 
>> ‘sock_create_kern’ makes integer from pointer without a cast
>>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, 
>> &sys_sock->sock);
>>   ^
>> In file included from include/linux/skbuff.h:29:0,
>> from include/linux/tcp.h:21,
>> from /quadstor/src/export/linuxdefs.h:19,
>>  from /quadstor/src/export/core_linux.c:19:
>> include/linux/net.h:216:5: note: expected ‘int’ but argument is of type 
>> ‘struct socket **’
>> int sock_create_kern(struct net *net, int family, int type, int proto, 
>> struct socket **res);
>> ^
>> /quadstor/src/export/core_linux.c:174:11: error: too few arguments to 
>> function ‘sock_create_kern’
>>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, 
>> &sys_sock->sock);
>>   ^
>> In file included from include/linux/skbuff.h:29:0,
>> from include/linux/tcp.h:21,
>> from /quadstor/src/export/linuxdefs.h:19,
>> from /quadstor/src/export/core_linux.c:19:
>> include/linux/net.h:216:5: note: declared here
>> int sock_create_kern(struct net *net, int family, int type, int proto, 
>> struct socket **res);
>> ^
>> /quadstor/src/export/core_linux.c: In function ‘g_new_bio’:
>> /quadstor/src/export/core_linux.c:1127:17: warning: assignment from 
>> incompati

Re: [quadstor-virt] Re: Building kernel modules failed!

2019-06-10 Thread QUADStor Support
Hi,

This might take longer than we initially thought. The number of build
failures is quite high across modules. It's probably better to go with
Ubuntu 16.04 for now.

On Thu, Jun 6, 2019 at 2:32 AM QUADStor Support  wrote:
>
> Hi,
>
> We need to fix a few build issues for kernels >= 4.14.x.  It might
> take us a day or two.
>
>
> On Wed, Jun 5, 2019 at 10:49 PM Louise Li  
> wrote:
> >
> > This is the year ago message. but I got the same problem now. My OS version 
> > is ubuntu 18.04.2.  What I need to update ?
> >
> >
> > On Monday, March 6, 2017 at 6:42:20 AM UTC-6, Duh wrote:
> >>
> >> Hello, module build failed on proxmox-ve host based on debian jessie
> >>
> >> root@b11:~# cat /etc/debian_version
> >> 8.7
> >> root@b11:~# uname -a
> >> Linux b11 4.4.44-1-pve #1 SMP PVE 4.4.44-83 (Wed, 1 Mar 2017 09:22:35 
> >> +0100) x86_64 GNU/Linux
> >> root@b11:~# dpkg -l | grep pve-headers
> >> ii  pve-headers-4.4.44-1-pve 4.4.44-83  
> >> amd64The Proxmox PVE Kernel Headers
> >> root@b11:~# dpkg -l | grep pve-kernel
> >> ii  pve-firmware 1.1-11 
> >> all  Binary firmware code for the pve-kernel
> >> ii  pve-kernel-4.4.44-1-pve  4.4.44-83  
> >> amd64The Proxmox PVE Kernel Image
> >> root@b11:~# dpkg -l | grep quadstor
> >> ii  quadstor-virt3.2.11 
> >> amd64QUADStor storage virtualization enterprise edition
> >>
> >> pve-kernel and headers are based on Ubuntu-4.4.0-63.84 packages
> >>
> >> build log follows...
> >>
> >> + [ ! -f /quadstor/lib/modules/corelib.o ]
> >> + uname
> >> + os=Linux
> >> + cd /quadstor/src/export
> >> + make clean
> >> rm -rf *.o *.ko .*.cmd *.mod.c .tmp_versions Module.symvers .*.o.d Module.*
> >> + make
> >> cp /quadstor/lib/modules/corelib.o /quadstor/src/export
> >> make -C /lib/modules/4.4.44-1-pve/build SUBDIRS=/quadstor/src/export 
> >> modules
> >> make[1]: Entering directory '/usr/src/linux-headers-4.4.44-1-pve'
> >>  CC [M]  /quadstor/src/export/ldev_linux.o
> >>  CC [M]  /quadstor/src/export/devq.o
> >>  LD [M]  /quadstor/src/export/ldev.o
> >>  CC [M]  /quadstor/src/export/core_linux.o
> >> In file included from include/scsi/scsi_cmnd.h:10:0,
> >> from /quadstor/src/export/core_linux.c:26:
> >> include/scsi/scsi_device.h:223:3: warning: ‘printk’ is an unrecognized 
> >> format function type [-Wformat=]
> >>   const char *, ...);
> >>   ^
> >> include/scsi/scsi_device.h:229:40: warning: ‘printk’ is an unrecognized 
> >> format function type [-Wformat=]
> >> scmd_printk(const char *, const struct scsi_cmnd *, const char *, ...);
> >>^
> >> In file included from include/uapi/linux/in.h:23:0,
> >> from include/linux/in.h:23,
> >> from /quadstor/src/export/linuxdefs.h:18,
> >> from /quadstor/src/export/core_linux.c:19:
> >> /quadstor/src/export/core_linux.c: In function ‘sys_sock_create’:
> >> include/linux/socket.h:163:18: warning: passing argument 1 of 
> >> ‘sock_create_kern’ makes pointer from integer without a cast
> >> #define AF_INET  2 /* Internet IP Protocol  */
> >>  ^
> >> /quadstor/src/export/core_linux.c:174:28: note: in expansion of macro 
> >> ‘AF_INET’
> >>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, 
> >> &sys_sock->sock);
> >>^
> >> In file included from include/linux/skbuff.h:29:0,
> >> from include/linux/tcp.h:21,
> >> from /quadstor/src/export/linuxdefs.h:19,
> >> from /quadstor/src/export/core_linux.c:19:
> >> include/linux/net.h:216:5: note: expected ‘struct net *’ but argument is 
> >> of type ‘int’
> >> int sock_create_kern(struct net *net, int family, int type, int proto, 
> >> struct socket **res);
> >> ^
> >> /quadstor/src/export/core_linux.c:174:63: warning: passing argument 4 of 
> >> ‘sock_create_kern’ makes integer from pointer without a cast
> >>  retval = sock_create_kern(AF_INET, SOCK_STREAM, IPPROTO_TCP, 
> >> &sys_sock->sock);
> >>   

Re: [quadstor-virt] Cannot find a free bint for tdisk oraclevm_test1 size 4096

2019-09-21 Thread QUADStor Support
The writes are failing because the available free physical space is low.
Since these are VMs maybe an unmap will help.

The site is experiencing problems. We will check why

On Sat, Sep 21, 2019 at 7:21 PM Sergey Ganchuk  wrote:

> Hi everybody.
> My quadstor has ran into this problem:
> Sep 21 13:12:09 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
> find a free bint for tdisk oraclevm_test1 size 4096
> Sep 21 13:12:09 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
> a free bint for size 4096
> Sep 21 13:12:09 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
> Default free 1571160747520
> Sep 21 13:12:09 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
> alloc block failed for size 4096
> Sep 21 13:12:12 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
> blocks failed for size 4096
> Sep 21 13:12:12 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
> data failed for lba 5451 transfer length 1
> Sep 21 13:12:12 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
> Sep 21 13:12:12 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
> find a free bint for tdisk oraclevm_test1 size 4096
> Sep 21 13:12:12 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
> a free bint for size 4096
> Sep 21 13:12:12 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
> Default free 1571160747520
> Sep 21 13:12:12 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
> alloc block failed for size 4096
> Sep 21 13:12:15 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
> blocks failed for size 4096
> Sep 21 13:12:15 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
> data failed for lba 5451 transfer length 1
> Sep 21 13:12:15 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
> Sep 21 13:12:15 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
> find a free bint for tdisk oraclevm_test1 size 4096
> Sep 21 13:12:15 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
> a free bint for size 4096
> Sep 21 13:12:15 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
> Default free 1571160747520
> Sep 21 13:12:15 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
> alloc block failed for size 4096
> Sep 21 13:12:18 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
> blocks failed for size 4096
> Sep 21 13:12:18 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
> data failed for lba 5451 transfer length 1
> Sep 21 13:12:18 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
> Sep 21 13:12:18 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
> find a free bint for tdisk oraclevm_test1 size 4096
> Sep 21 13:12:18 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
> a free bint for size 4096
> Sep 21 13:12:18 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
> Default free 1571160747520
> Sep 21 13:12:18 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
> alloc block failed for size 4096
> Sep 21 13:12:21 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
> blocks failed for size 4096
> Sep 21 13:12:21 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
> data failed for lba 5451 transfer length 1
> Sep 21 13:12:21 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
>
> I have one pool
> IDVendorModelSerial NumberNameSizeUsedStatus
> 1 AVAGO MR9361-8i 00791e94e425e81723e0702a0eb00506 sda4 8.673 TB 8.315 TB
> D
> 2 AVAGO MR9361-8i 00309653e753e81723e0702a0eb00506 sdb 8.730 TB 8.374 TB
> 3 AVAGO MR9361-8i 00680c69666c0e1823e0702a0eb00506 sdc 8.730 TB 8.375 TB
> 4 AVAGO MR9361-8i 00909892edbce81723e0702a0eb00506 sdd 6.984 TB 6.625 TB
> Pool Disk Statistics
> Total Size: 33.118 TB
> Used Size: 31.689 TB
> VDisk Usage: 31.603 TB
> Deduped Size: 23.58 GB
> Uncompressed Size: 31.580 TB
> Compressed Size: 0.00 KB
> Compression Hits: 0.00 KB
> Dedupe Ratio: 1.001
> And two vdisks:
> oraclevm_test1 Default 6e5bef205312d47d600479ed1d65c618 16384 GB E Modify
>  View
> 
> 
> oraclevm_test2 Default 6ec81be79e5006c77d6f0f67d0019ff4 16384 GB E Modify
>  View
> 
> 
> After restart of the server it works for a while, but then fails again.
>
> PS. What happened with the site?
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this dis

Re: [quadstor-virt] Cannot find a free bint for tdisk oraclevm_test1 size 4096

2019-09-21 Thread QUADStor Support
Site is fine now.

Regarding UNMAP (if ESXi) please refer to

https://kb.vmware.com/s/article/2014849
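
As a rough sketch, on ESXi 5.5 and later a manual reclaim can be run per
datastore (the label DATASTORE1 below is only a placeholder; for ESXi
5.0/5.1 the vmkfstools method from the KB article applies instead):

esxcli storage vmfs unmap -l DATASTORE1

This sends UNMAPs for the free blocks of the datastore so the vdisk can
release the corresponding physical space in the pool.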

On Sat, Sep 21, 2019 at 7:24 PM QUADStor Support 
wrote:

> The writes are failing because the available free physical space is low.
> Since these are VMs maybe an unmap will help.
>
> The site is experiencing problems. We will check why
>
> On Sat, Sep 21, 2019 at 7:21 PM Sergey Ganchuk  wrote:
>
>> Hi everybody.
>> My quadstor has ran into this problem:
>> Sep 21 13:12:09 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
>> find a free bint for tdisk oraclevm_test1 size 4096
>> Sep 21 13:12:09 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
>> a free bint for size 4096
>> Sep 21 13:12:09 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
>> Default free 1571160747520
>> Sep 21 13:12:09 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
>> alloc block failed for size 4096
>> Sep 21 13:12:12 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
>> blocks failed for size 4096
>> Sep 21 13:12:12 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
>> data failed for lba 5451 transfer length 1
>> Sep 21 13:12:12 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
>> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
>> Sep 21 13:12:12 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
>> find a free bint for tdisk oraclevm_test1 size 4096
>> Sep 21 13:12:12 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
>> a free bint for size 4096
>> Sep 21 13:12:12 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
>> Default free 1571160747520
>> Sep 21 13:12:12 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
>> alloc block failed for size 4096
>> Sep 21 13:12:15 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
>> blocks failed for size 4096
>> Sep 21 13:12:15 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
>> data failed for lba 5451 transfer length 1
>> Sep 21 13:12:15 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
>> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
>> Sep 21 13:12:15 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
>> find a free bint for tdisk oraclevm_test1 size 4096
>> Sep 21 13:12:15 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
>> a free bint for size 4096
>> Sep 21 13:12:15 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
>> Default free 1571160747520
>> Sep 21 13:12:15 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
>> alloc block failed for size 4096
>> Sep 21 13:12:18 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
>> blocks failed for size 4096
>> Sep 21 13:12:18 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
>> data failed for lba 5451 transfer length 1
>> Sep 21 13:12:18 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
>> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
>> Sep 21 13:12:18 quadstor1 kernel: WARN: bdev_alloc_for_pgdata:5014 Cannot
>> find a free bint for tdisk oraclevm_test1 size 4096
>> Sep 21 13:12:18 quadstor1 kernel: WARN: bdev_alloc_block:5122 Cannot find
>> a free bint for size 4096
>> Sep 21 13:12:18 quadstor1 kernel: WARN: bdev_alloc_block:5161 Group
>> Default free 1571160747520
>> Sep 21 13:12:18 quadstor1 kernel: WARN: pgdata_alloc_blocks:5509 bdev
>> alloc block failed for size 4096
>> Sep 21 13:12:21 quadstor1 kernel: WARN: scan_write_data:5719 pgdata alloc
>> blocks failed for size 4096
>> Sep 21 13:12:21 quadstor1 kernel: WARN: __tdisk_cmd_write:6277 scan write
>> data failed for lba 5451 transfer length 1
>> Sep 21 13:12:21 quadstor1 kernel: WARN: __tdisk_cmd_write:6449 write
>> failed for lba 5451 transfer length 1 cw 1 cmd 89 sent ctio 0
>>
>> I have one pool
>> IDVendorModelSerial NumberNameSizeUsedStatus
>> 1 AVAGO MR9361-8i 00791e94e425e81723e0702a0eb00506 sda4 8.673 TB 8.315 TB
>> D
>> 2 AVAGO MR9361-8i 00309653e753e81723e0702a0eb00506 sdb 8.730 TB 8.374 TB
>> 3 AVAGO MR9361-8i 00680c69666c0e1823e0702a0eb00506 sdc 8.730 TB 8.375 TB
>> 4 AVAGO MR9361-8i 00909892edbce81723e0702a0eb00506 sdd 6.984 TB 6.625 TB
>> Pool Disk Statistics
>> Total Size: 33.118 TB
>> Used Size: 31.689 TB
>> VDisk Usage: 31.603 TB
>> Deduped Size: 23.58 GB
>> Uncompressed Size: 31.580 TB
>> Compressed Size: 0.00 KB
>> Compression Hits: 0.00 KB
>> Dedupe Ratio: 1.001
>> And two vdisks:
>> oraclevm_test1 Default 6e5bef205312d47d600479ed1d65c618 16384 GB E Modify
>> <http://quadstor1/cgi-bin/modifytdisk.cgi?target_id=1> View

Re: [quadstor-virt] service quadstor 3.2.17 status ERR do_unit_serial_number 920 Failed result status 2 on debian 9.11

2019-12-25 Thread QUADStor Support
It seems that there are disks which are listed as SCSI disks but are not
returning a good response for the SCSI INQUIRY unit serial number page. But
this is not a problem and the daemon generates a unique serial number in such
cases. The ERR message can be removed and we will do it in the next release.
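
If you want to see which disks fail the query, something like the following
can be run per disk (a hedged example; it needs the sg3_utils package, and
/dev/sdX is a placeholder for each SCSI disk):

sg_vpd --page=sn /dev/sdX

Disks that return an error for this VPD page (0x80, unit serial number) are
the ones for which the daemon generates its own serial number.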

On Wed, Dec 25, 2019 at 10:33 PM victor Gusev  wrote:

> Dear all, I have
>
> Debian 9.11 x64
> And fresh quadstor 3.2.17
>
> When I use command:
>
> service quadstor status
>
> I have see the following messages:
>
> qstor systemd[1]: Starting QUADStor Storage Virtualization...
>> qstor mdaemon[3662]: ERR: do_unit_serial_number:920 Failed result status 2
>> qstor mdaemon[3662]: ERR: do_unit_serial_number:920 Failed result status 2
>> qstor mdaemon[3662]: ERR: do_unit_serial_number:920 Failed result status 2
>> qstor quadstor.init[728]: Starting quadstor:
>> qstor systemd[1]: Started QUADStor Storage Virtualization.
>>
>
> I have line RemoveIPC=no in /etc/systemd/logind.conf  but ERR message
> still here...
>
> quadstor - works fine but the ERR message is worry me...
>
> /quadstor/etc/quadstor.log says me the same:
>
> Tue Dec 24 23:41:05 2019 Err: Failed result status 2
>> Tue Dec 24 23:41:05 2019 Err: Failed result status 2
>> Tue Dec 24 23:41:05 2019 Err: Failed result status 2
>>
>
> Reboot of a node does not help me.
> Please help me with ERR message. Tnanks.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/fc617014-d23a-41af-8625-e19c6c6132c8%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnmYx0%3DAuhCgJifNp2gX9ijtOe0gPALcDcEv_2VHzXjznw%40mail.gmail.com.


Re: [quadstor-virt] Calculate size of master disk?

2019-12-27 Thread QUADStor Support
Hi,

Assuming you meant the size of the disk which goes as the first disk for
Default pool to hold the deduplication tables, twice the size of the system
RAM is sufficient.
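
For example, on a host with 32 GB of RAM a disk of around 64 GB as the
first disk of the Default pool would be enough for the deduplication tables.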

For any other pool there is no restriction in size, but preferably under
100TB. The maximum allowed size for a disk that can be added is 128 TB

On Thu, Dec 26, 2019 at 8:45 PM victor Gusev  wrote:

> Hi all!
>
> Which size of master disk you can recomend (*in general* for 10 TB hdd
> storage data)?
>
> Best regards, Viktor
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/fc0e10ae-8285-46a1-b9b4-15fe3432f12f%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnkTYumJycQSrBP3_Fesbv2V%2BJPDkea4beGdmGNyEXS4XA%40mail.gmail.com.


Re: [quadstor-virt] HA with shared storage

2020-02-05 Thread QUADStor Support
Hi,

The shared storage feature remained unused for a long time and has been
removed for some time now. It needs some work.

On Wed, Feb 5, 2020 at 3:44 PM Fabien Rouach 
wrote:

> Hi,
> Is there any documentation on configuring HA with shared storage?
> I can't find it anywhere.
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/b90434aa-a5eb-418f-9fd1-85f30a791cb3%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxn%3DuRqFwScL%3DkjZC%3D%3DO1BOW47UPjZ2QU1WQd1q2TVLZ4zA%40mail.gmail.com.


Re: [quadstor-virt] adding entry in initiator.allow entry

2020-05-18 Thread QUADStor Support
Hi,

The file is checked on every login to a target. There is no need to restart
the service. Please note that the file name is initiators.allow and not
initiator.allow and it should be under /quadstor/etc/iet/
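
As a rough sketch of the format (the target name and addresses below are
hypothetical; see the access control documentation for the exact syntax),
each line lists a target followed by the initiators allowed to log in to it:

# /quadstor/etc/iet/initiators.allow
iqn.2006-06.com.quadstor.vdisk1 192.168.10.0/24, iqn.1998-01.com.vmware:esx1
ALL 192.168.10.5

Since the file is checked at login, the next login attempt will pick up the
new entries.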

On Mon, May 18, 2020 at 8:22 PM Daniel Escalona 
wrote:

> I added an entry to this file. How will it take effect without restarting
> the quadstor service? Thanks.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/b74389dc-53a1-4d90-8da1-44f24bc07009%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnkmUyn7pyWR69%3DmPG11fZz_uz_d9bv0C%2BQRU9G8nAXLrA%40mail.gmail.com.


Re: [quadstor-virt] Storage virtualization host to pools

2020-12-17 Thread QUADStor Support
Hi,

If you mean access to a virtual disk, you can prevent access using the
fcconfig tool
https://www.quadstor.com/support/136-vdisk-fibre-channel-access-management.html

For iSCSI host access please see "Access control based on Initiator IQN
Name or IP Addresses" in
https://www.quadstor.com/support/121-accessing-virtual-disks.html

On Thu, Dec 17, 2020 at 1:13 AM Will Fink  wrote:

> Is there a feature or soon to be added feature that can limit which hosts
> can access a poo?.
> RIght now it's wide open. I would like to be able to limit or setup what
> disk pools a defined host has access to.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/cdd3b11e-7ddd-4bee-bf5a-6c9c785ad9dcn%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnkAaNOGxB55EL8BRqmcYcYLXAxbmj8qE9LbmopgoSGtsw%40mail.gmail.com.


Re: [quadstor-virt] QUADStor VTL Ext 3.0.58 Debian on Ubuntu

2021-12-30 Thread QUADStor Support
Most likely you do not have the libssl1.0 package installed.

On Debian 9/10/11 install the latest libssl1.0 from
https://security.debian.org/debian-security/pool/updates/main/o/openssl/
  For example:
  wget -c http://security.debian.org/debian-security/pool/updates/main/o/openssl/libssl1.0.0_1.0.1t-1+deb8u12_amd64.deb
  apt install ./libssl1.0.0_1.0.1t-1+deb8u12_amd64.deb
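
A quick way to confirm the package is present afterwards (hedged, assuming a
Debian/Ubuntu system as above):

dpkg -l | grep libssl1.0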


If this is not the case, please attach /var/log/syslog

On Thu, Dec 30, 2021 at 1:46 PM Łukasz Stuła  wrote:

> Hello,
>
> I have problem with installation QUADStor software. After run installation
> script QUADStor will not start. It looks like missing config file.
>
> lukasz@vtl:~$ sudo service quadstorvtl start
> Job for quadstorvtl.service failed because the control process exited with
> error code.
> See "systemctl status quadstorvtl.service" and "journalctl -xe" for
> details.
> lukasz@vtl:~$ service quadstorvtl status
> quadstorvtl.service - QUADStor Virtual Tape Library
>  Loaded: loaded (/lib/systemd/system/quadstorvtl.service; enabled;
> vendor preset: enabled)
>  Active: failed (Result: exit-code) since Tue 2021-12-21 20:17:00 UTC;
> 3s ago
> Process: 41586 ExecStart=/quadstorvtl/etc/quadstorvtl.init start
> (code=exited, status=1/FAILURE)
>
> Dec 21 20:16:53 vtl systemd[1]: Starting QUADStor Virtual Tape Library...
> Dec 21 20:16:58 vtl quadstorvtl.init[41586]: Starting quadstorvtl:
> Dec 21 20:16:58 vtl quadstorvtl.init[43816]: cat:
> /quadstorvtl/etc/quadstor.conf: No such file or directory
> Dec 21 20:16:58 vtl quadstorvtl.init[41586]: Cannot start master daemon
> Dec 21 20:17:00 vtl quadstorvtl.init[41586]: Stopping quadstorvtl:
> Dec 21 20:17:00 vtl systemd[1]: quadstorvtl.service: Control process
> exited, code=exited, status=1/FAILURE
> Dec 21 20:17:00 vtl systemd[1]: quadstorvtl.service: Failed with result
> 'exit-code'.
> Dec 21 20:17:00 vtl systemd[1]: Failed to start QUADStor Virtual Tape
> Library.
>
>
> Anyone had simmilar issue and fix it?:)
>
> Best Regards
> Lukasz
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/06eb744e-53e9-44a0-bdc9-29167ef2241fn%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnmUwqE7CtuX4Na%3DbkxQ3iuNV90yZiCBXRoKGX8fUyw5Og%40mail.gmail.com.


Re: [quadstor-virt] ietd restart without restarting the hole quadstor daemon

2022-01-16 Thread QUADStor Support
While it's not possible to restart only iSCSI, we will look into the problem
based on the trace and will try to fix this.

On Mon, Jan 17, 2022 at 2:44 AM Thomas Müller  wrote:

> sometimes our iscsi bound esxi-hosts looses the connection to the quadstor
> system, while fc bound esxi-host are still working.
> is there a possibility only to restart the iscsi target daemon and not the
> hole quadstor?
>
> syslog:
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559022] INFO: task
> ietd:4068 blocked for more than 120 seconds.
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559076]   Tainted: G
> OE 4.19.0-17-amd64 #1 Debian 4.19.194-2
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559125] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559177] ietdD
>  0  4068  1 0x
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559181] Call Trace:
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559194]
>  __schedule+0x29f/0x840
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559199]  schedule+0x28/0x80
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559232]  cv_wait+0x98/0x120
> [coredev]
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559240]  ?
> finish_wait+0x80/0x80
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559267]
>  device_free_initiator+0xeb/0x400 [coredev]
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559272]  ?
> wake_up_q+0x70/0x70
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559280]  ?
> __session_del+0x16c/0x1b0 [iscsit]
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559286]  ?
> del_session+0x41/0x70 [iscsit]
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559293]  ?
> ioctl+0x1ae/0x310 [iscsit]
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559299]  ?
> do_vfs_ioctl+0xa4/0x630
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559302]  ?
> ksys_ioctl+0x60/0x90
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559305]  ?
> __x64_sys_ioctl+0x16/0x20
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559311]  ?
> do_syscall_64+0x53/0x110
> Jan 12 12:12:02 SDS-QUADstor01 kernel: [584218.559316]  ?
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/85c666f3-7f04-48ab-8427-4336f2225f38n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnk23f2OpmKotusqpyeiBC7S4fsU3K%3DJN5m7JuKRLVw%3Dww%40mail.gmail.com.


Re: [quadstor-virt] i cant see vault column

2022-07-27 Thread QUADStor Support
Hi,

The VTL group is https://groups.google.com/g/quadstor-vtl

Regarding your question, there isn't an option to vault from the GUI yet.
There is an "Unvault" column which will allow unvaulting cartridges but
this will only appear when there are cartridges to unvault.
To vault a cartridge the application will have to move the cartridge to an
I/E port.
The other option to vault would be to use the following command
/quadstorvtl/bin/vcconfig --modify --vtl=<vtl name> --label=<vcartridge label> --vault
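
For example (the VTL name and barcode below are hypothetical), to vault the
vCartridge with barcode V00001 from a VTL named VTL1:
/quadstorvtl/bin/vcconfig --modify --vtl=VTL1 --label=V00001 --vault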

Other options can be got from the command
/quadstorvtl/bin/vcconfig -h

On Wed, Jul 27, 2022 at 8:33 PM Mustafa KALAYCI 
wrote:

> Hi, i have install quadstor on centos 7. i cant see vault column in
> vCartidge infromation table
> how can i do
> Thanks
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/b2ff77e3-b9f7-45a1-9fe8-3700869719b7n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnnvNXo6FamR4q6W91bJ%2BO9XywhBWGQKbQ_kzX5iUCu7xg%40mail.gmail.com.


Re: [quadstor-virt] pgtable_alloc_virtual:103 Got addr 140312031141888 for size 4194304

2023-01-18 Thread QUADStor Support
Hi,

This is normal. Up to 70% of the memory is initially reserved. It can go
up to 80%.
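
As a quick cross-check against your log: pgtable_init reports "Mem
50431660032 reserved 70", and 70% of 50431660032 bytes is roughly
35302162000 bytes, which matches the "reserved mem" value in the log and
the ~32 GB usage you are seeing.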

On Wed, Jan 18, 2023 at 9:33 PM Максим Ткаленко (Unified Cloud System) <
tkalenk...@gmail.com> wrote:

> Hi, when freshly installed on a machine with 48Gb of memory on CentOS 7 At
> the same time, the application takes up 32 Gb of memory. Does anyone know
> if this is normal or what to do to fix the problem.
>
> in log
> Jan 18 17:33:38 ds-1 coredev: main:1544 pgtable init
> Jan 18 17:33:38 ds-1 coredev: pgtable_init:161 Mem 50431660032 reserved 70
> reserved mem 35302162000 count 8416
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312031141888 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312026947584 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312022753280 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312018558976 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312014364672 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312010170368 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312005976064 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140312001781760 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311997587456 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311993393152 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311989198848 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311985004544 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311980810240 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311976615936 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311972421632 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311968227328 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311964033024 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311959838720 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311955644416 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311951450112 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311947255808 for size 4194304
> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
> 140311943061504 for size 4194304
>
> and so on
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/2b4cf1f6-82f9-426b-a027-dd4ba067bb73n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnm7NpvFj%2BdMy7vnyFRvxE6EwcvcFfnEidhAu0tQoKGa0g%40mail.gmail.com.


Re: [quadstor-virt] pgtable_alloc_virtual:103 Got addr 140312031141888 for size 4194304

2023-01-19 Thread QUADStor Support
Hi,

On Thu, Jan 19, 2023 at 5:30 PM Максим Ткаленко (Unified Cloud System) <
tkalenk...@gmail.com> wrote:

> this situation on release 3.2.22
> I have another machine with CentOS 7 (Kernel 3.10.0-1160.81.1.el7.x86_64)
> and 16Gb Ram on it, and release 3.2.22 works for a maximum of half a day,
> then the application hangs and falls off from ESXi hosts.
> but on this machine and release 3.2.19 there is no such problem.
>

Please send the logs from this machine (HTML UI > System Page > Run
Diagnostics) to supp...@quadstor.com


>
> Going back to the machine with 48GB of memory, isn't 32GB eaten up by
> quadstor-virt, while there is only 2Tb of disk space?
> both of my machines have 2TB of disk space dedicated to quadstor-virt
>

16GB would have been allocated initially for deduplication tables and the
memory usage over time would have grown easily to around 24 GB. Now we
allocate more memory upfront which allows us to work with memory better.
The virtualization software is heavily dependent on memory, so any extra
memory is always a help.  It's not dependent on the disk space configured.
Yes, if the configured disk is very large and memory is low then there will
be a slowdown in performance because we can no longer cache metadata as
much as needed. But in the opposite case it should help with performance.

>
> On Thursday, January 19, 2023 at 01:45:20 UTC+3, quadstor wrote:
>
>> Hi,
>>
>> This is normal. Upto 70% of the memory is initially reserved. It can go
>> upto 80%.
>>
>> On Wed, Jan 18, 2023 at 9:33 PM Максим Ткаленко (Unified Cloud System) <
>> tkale...@gmail.com> wrote:
>>
>>> Hi, when freshly installed on a machine with 48Gb of memory on CentOS 7
>>> At the same time, the application takes up 32 Gb of memory. Does anyone
>>> know if this is normal or what to do to fix the problem.
>>>
>>> in log
>>> Jan 18 17:33:38 ds-1 coredev: main:1544 pgtable init
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_init:161 Mem 50431660032 reserved
>>> 70 reserved mem 35302162000 count 8416
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312031141888 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312026947584 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312022753280 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312018558976 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312014364672 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312010170368 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312005976064 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140312001781760 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311997587456 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311993393152 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311989198848 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311985004544 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311980810240 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311976615936 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311972421632 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311968227328 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311964033024 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311959838720 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311955644416 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311951450112 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311947255808 for size 4194304
>>> Jan 18 17:33:38 ds-1 coredev: pgtable_alloc_virtual:103 Got addr
>>> 140311943061504 for size 4194304
>>>
>>> and so on
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "QUADStor Storage Virtualization" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to quadstor-vir...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/quadstor-virt/2b4cf1f6-82f9-426b-a027-dd4ba067bb73n%40googlegroups.com
>>> 
>>> .
>>>
>> --
> You received this message because you are subscribed to the Google Gro

Re: [quadstor-virt] Starting quadstorvtl (via systemctl): Job for quadstorvtl.service failed because the control process exited with error code. See "systemctl status quadstorvtl.service" and "journal

2023-03-03 Thread QUADStor Support
Please check whether you have Secure Boot enabled; it should be disabled in the
BIOS (a quick way to check is sketched below). If Secure Boot is already
disabled, please attach /var/log/messages
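
A quick way to confirm the Secure Boot state from the running system (assuming
mokutil is installed, as it usually is on CentOS 7 EFI installs):

    # Prints "SecureBoot enabled" or "SecureBoot disabled"
    mokutil --sb-state

If the machine boots in legacy BIOS mode, the command may report that EFI
variables are not supported, which also means Secure Boot is not in effect.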

On Sat, Mar 4, 2023 at 2:14 AM tim-peer wiederkehr <
timpeerwiederkeh...@gmail.com> wrote:

> Hello
>
> I just finish a fresh installation of CentOS 7 and install quadstorvtl
> Step by step by the instructions.
> Installation/Upgrading on RHEL/CentOS, SLES 12 SP2 and Debian 7/8 |
> QUADStor Systems
> 
> But the service never starts. What i am  missing?
>
> [root@localhost ~]# systemctl status quadstorvtl.service
> ● quadstorvtl.service - QUADStor Virtual Tape Library
>Loaded: loaded (/usr/lib/systemd/system/quadstorvtl.service; enabled;
> vendor preset: disabled)
>Active: failed (Result: exit-code) since Fri 2023-03-03 21:06:55 CET;
> 9min ago
>   Process: 25069 ExecStart=/quadstorvtl/etc/quadstorvtl.init start
> (code=exited, status=1/FAILURE)
>
> Mar 03 21:06:53 localhost.localdomain systemd[1]: Starting QUADStor
> Virtual Tape Library...
> Mar 03 21:06:54 localhost.localdomain quadstorvtl.init[25069]: Starting
> quadstorvtl: Failed to inse...le
> Mar 03 21:06:55 localhost.localdomain quadstorvtl.init[25069]: Stopping
> quadstorvtl:
> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service:
> control process exited, cod...s=1
> Mar 03 21:06:55 localhost.localdomain systemd[1]: Failed to start QUADStor
> Virtual Tape Library.
> Mar 03 21:06:55 localhost.localdomain systemd[1]: Unit quadstorvtl.service
> entered failed state.
> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service
> failed.
> Hint: Some lines were ellipsized, use -l to show in full.
>
> [root@localhost ~]# journalctl -xe
> -- Subject: Unit session-c6.scope has finished start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit session-c6.scope has finished starting up.
> --
> -- The start-up result is done.
> Mar 03 21:06:54 localhost.localdomain runuser[25147]:
> pam_unix(runuser-l:session): session opened for us
> Mar 03 21:06:55 localhost.localdomain runuser[25147]:
> pam_unix(runuser-l:session): session closed for us
> Mar 03 21:06:55 localhost.localdomain systemd[1]: Removed slice User Slice
> of vtdbuser.
> -- Subject: Unit user-987.slice has finished shutting down
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit user-987.slice has finished shutting down.
> Mar 03 21:06:55 localhost.localdomain quadstorvtl.init[25069]: Stopping
> quadstorvtl:
> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service:
> control process exited, code=exit
> Mar 03 21:06:55 localhost.localdomain systemd[1]: Failed to start QUADStor
> Virtual Tape Library.
> -- Subject: Unit quadstorvtl.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit quadstorvtl.service has failed.
> --
> -- The result is failed.
> Mar 03 21:06:55 localhost.localdomain systemd[1]: Unit quadstorvtl.service
> entered failed state.
> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service
> failed.
> Mar 03 21:06:55 localhost.localdomain polkitd[836]: Unregistered
> Authentication Agent for unix-process:2
> Mar 03 21:06:55 localhost.localdomain sudo[25046]: pam_unix(sudo:session):
> session closed for user root
> Mar 03 21:08:01 localhost.localdomain systemd[1]: Starting Cleanup of
> Temporary Directories...
> -- Subject: Unit systemd-tmpfiles-clean.service has begun start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit systemd-tmpfiles-clean.service has begun starting up.
> Mar 03 21:08:01 localhost.localdomain systemd[1]: Started Cleanup of
> Temporary Directories.
> -- Subject: Unit systemd-tmpfiles-clean.service has finished start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit systemd-tmpfiles-clean.service has finished starting up.
> --
> -- The start-up result is done.
> Mar 03 21:10:01 localhost.localdomain systemd[1]: Started Session 5 of
> user root.
> -- Subject: Unit session-5.scope has finished start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit session-5.scope has finished starting up.
> --
> -- The start-up result is done.
> Mar 03 21:10:01 localhost.localdomain CROND[25262]: (root) CMD
> (/usr/lib64/sa/sa1 1 1)
>
> Thanks for help.
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/e

Re: [quadstor-virt] Starting quadstorvtl (via systemctl): Job for quadstorvtl.service failed because the control process exited with error code. See "systemctl status quadstorvtl.service" and "journal

2023-03-03 Thread QUADStor Support
Hi,

A reboot is now required after every install, and rebooting should start the
services. A quick post-reboot check is sketched below.
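
    # Standard systemd commands, nothing QUADStor specific:
    # confirm the service came up after the reboot
    systemctl status quadstorvtl.service
    # if it still fails, the boot-time journal usually shows why
    journalctl -u quadstorvtl.service -b --no-pager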

On Sat, Mar 4, 2023 at 2:37 AM tim-peer wiederkehr <
timpeerwiederkeh...@gmail.com> wrote:

> Hello
>
> Thanks for the fast respond. No Secure Boot is disabled. /var/log/messages
> is attached.
>
>
>
> quadstor wrote on Friday, March 3, 2023 at 21:47:20 UTC+1:
>
>> Please check if you have secure boot enabled. It should be disabled in
>> the BIOS. If this is not the case please attach /var/log/messages
>>
>> On Sat, Mar 4, 2023 at 2:14 AM tim-peer wiederkehr <
>> timpeerwi...@gmail.com> wrote:
>>
>>> Hello
>>>
>>> I just finish a fresh installation of CentOS 7 and install quadstorvtl
>>> Step by step by the instructions.
>>> Installation/Upgrading on RHEL/CentOS, SLES 12 SP2 and Debian 7/8 |
>>> QUADStor Systems
>>> 
>>> But the service never starts. What i am  missing?
>>>
>>> [root@localhost ~]# systemctl status quadstorvtl.service
>>> ● quadstorvtl.service - QUADStor Virtual Tape Library
>>>Loaded: loaded (/usr/lib/systemd/system/quadstorvtl.service; enabled;
>>> vendor preset: disabled)
>>>Active: failed (Result: exit-code) since Fri 2023-03-03 21:06:55 CET;
>>> 9min ago
>>>   Process: 25069 ExecStart=/quadstorvtl/etc/quadstorvtl.init start
>>> (code=exited, status=1/FAILURE)
>>>
>>> Mar 03 21:06:53 localhost.localdomain systemd[1]: Starting QUADStor
>>> Virtual Tape Library...
>>> Mar 03 21:06:54 localhost.localdomain quadstorvtl.init[25069]: Starting
>>> quadstorvtl: Failed to inse...le
>>> Mar 03 21:06:55 localhost.localdomain quadstorvtl.init[25069]: Stopping
>>> quadstorvtl:
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service:
>>> control process exited, cod...s=1
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: Failed to start
>>> QUADStor Virtual Tape Library.
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: Unit
>>> quadstorvtl.service entered failed state.
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service
>>> failed.
>>> Hint: Some lines were ellipsized, use -l to show in full.
>>>
>>> [root@localhost ~]# journalctl -xe
>>> -- Subject: Unit session-c6.scope has finished start-up
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit session-c6.scope has finished starting up.
>>> --
>>> -- The start-up result is done.
>>> Mar 03 21:06:54 localhost.localdomain runuser[25147]:
>>> pam_unix(runuser-l:session): session opened for us
>>> Mar 03 21:06:55 localhost.localdomain runuser[25147]:
>>> pam_unix(runuser-l:session): session closed for us
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: Removed slice User
>>> Slice of vtdbuser.
>>> -- Subject: Unit user-987.slice has finished shutting down
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit user-987.slice has finished shutting down.
>>> Mar 03 21:06:55 localhost.localdomain quadstorvtl.init[25069]: Stopping
>>> quadstorvtl:
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service:
>>> control process exited, code=exit
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: Failed to start
>>> QUADStor Virtual Tape Library.
>>> -- Subject: Unit quadstorvtl.service has failed
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit quadstorvtl.service has failed.
>>> --
>>> -- The result is failed.
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: Unit
>>> quadstorvtl.service entered failed state.
>>> Mar 03 21:06:55 localhost.localdomain systemd[1]: quadstorvtl.service
>>> failed.
>>> Mar 03 21:06:55 localhost.localdomain polkitd[836]: Unregistered
>>> Authentication Agent for unix-process:2
>>> Mar 03 21:06:55 localhost.localdomain sudo[25046]:
>>> pam_unix(sudo:session): session closed for user root
>>> Mar 03 21:08:01 localhost.localdomain systemd[1]: Starting Cleanup of
>>> Temporary Directories...
>>> -- Subject: Unit systemd-tmpfiles-clean.service has begun start-up
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit systemd-tmpfiles-clean.service has begun starting up.
>>> Mar 03 21:08:01 localhost.localdomain systemd[1]: Started Cleanup of
>>> Temporary Directories.
>>> -- Subject: Unit systemd-tmpfiles-clean.service has finished start-up
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit systemd-tmpfiles-clean.service has finished starting up.
>>> --
>>> -- The start-up result is done.
>>> Mar 03 21:10:01 localhost.localdomain systemd[1]: Started Session 5 of
>>> user root.
>>> -- Subject: Unit session-5.scope has finished start-up
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --

Re: [quadstor-virt] Unable to get configured disk list

2023-03-10 Thread QUADStor Support
Hi,

Please attach /var/log/messages for now

On Fri, Mar 10, 2023 at 11:39 PM 'Phael Ramirez' via QUADStor Storage
Virtualization  wrote:

> My system froze and did a force shutdown.
>
> Ubuntu 16.04.6 LTS
> Quadstor Version 3.2.17
> #systemctl status quadstor
> ● quadstor.service - QUADStor Storage Virtualization
>Loaded: loaded (/lib/systemd/system/quadstor.service; disabled; vendor
> preset: enabled)
>Active: active (running) since Fri 2023-03-10 08:58:11 CST; 44min ago
>   Process: 1592 ExecStart=/quadstor/etc/quadstor.init start (code=exited,
> status=0/SUCCESS)
>  Main PID: 4169 (ietd)
>CGroup: /system.slice/quadstor.service
>└─4169 /quadstor/sbin/ietd
>
> Mar 10 08:57:52 rw-quadstor systemd[1]: Starting QUADStor Storage
> Virtualization...
> Mar 10 08:57:53 rw-quadstor mdaemon[4172]: ERROR: disk_getsize:87 Unable
> to open device /dev/mapper/rw-quadstor-vg-root
> Mar 10 08:57:53 rw-quadstor mdaemon[4172]: ERROR: disk_getsize:87 Unable
> to open device /dev/mapper/rw-quadstor-vg-swap_1
> Mar 10 08:58:11 rw-quadstor quadstor.init[1592]: Starting quadstor:
> Getting VDisk list failed
> Mar 10 08:58:11 rw-quadstor systemd[1]: Started QUADStor Storage
> Virtualization.
> Mar 10 08:58:16 rw-quadstor systemd[1]: Started QUADStor Storage
> Virtualization.
>
> /# mdadm --query --detail /dev/md127
> /dev/md127:
> Version : 1.2
>   Creation Time : Mon Jun 22 08:48:34 2020
>  Raid Level : raid6
>  Array Size : 17575566336 (16761.37 GiB 17997.38 GB)
>   Used Dev Size : 1952840704 (1862.37 GiB 1999.71 GB)
>Raid Devices : 11
>   Total Devices : 11
> Persistence : Superblock is persistent
>
>   Intent Bitmap : Internal
>
> Update Time : Fri Mar 10 07:51:17 2023
>   State : clean
>  Active Devices : 11
> Working Devices : 11
>  Failed Devices : 0
>   Spare Devices : 0
>
>  Layout : left-symmetric
>  Chunk Size : 64K
>
>Name : localhost:rw-esos-sw-raid6
>UUID : b4f3e147:50a5c1fe:a12a4c3f:9cf61f98
>  Events : 243231
>
> Number   Major   Minor   RaidDevice State
>    0       8       16        0      active sync   /dev/sdb
>    1       8      128        1      active sync   /dev/sdi
>    2       8       32        2      active sync   /dev/sdc
>    3       8      144        3      active sync   /dev/sdj
>   11       8       80        4      active sync   /dev/sdf
>    5       8      160        5      active sync   /dev/sdk
>    6       8       64        6      active sync   /dev/sde
>    7       8       96        7      active sync   /dev/sdg
>    8       8        0        8      active sync   /dev/sda
>    9       8      112        9      active sync   /dev/sdh
>   10       8       48       10      active sync   /dev/sdd
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/41712316-754b-4a95-94f7-fc8afd1b7bfan%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnk7cZVqvHzxspZ4nTfrWxhzSrSjqF0SNZeQR-G7CYoyhQ%40mail.gmail.com.


Re: [quadstor-virt] Unable to get configured disk list

2023-03-10 Thread QUADStor Support
It looks like the database is corrupted

Mar 10 12:36:47 rw-quadstor mdaemon[4095]: ERR: __pgsql_exec_query:50
Connect to db failed error is FATAL:  database "qsdb" does not exist
   DETAIL:  The database
subdirectory "pg_tblspc/3223392167/16384" is missing.


Please send a mail to supp...@quadstor.com for recovery steps

On Sat, Mar 11, 2023 at 12:29 AM 'Phael Ramirez' via QUADStor Storage
Virtualization  wrote:

> I don't have /var/log/messages
>
> I've attached /var/log/syslog and journalctl.txt
>
> Best Regards,
>
>
> <>
>
>
> On Fri, Mar 10, 2023 at 12:10 PM QUADStor Support 
> wrote:
>
>> Hi,
>>
>> Please attach /var/log/messages for now
>>
>> On Fri, Mar 10, 2023 at 11:39 PM 'Phael Ramirez' via QUADStor Storage
>> Virtualization  wrote:
>>
>>> My system froze and did a force shutdown.
>>>
>>> Ubuntu 16.04.6 LTS
>>> Quadstor Version 3.2.17
>>> #systemctl status quadstor
>>> ● quadstor.service - QUADStor Storage Virtualization
>>>Loaded: loaded (/lib/systemd/system/quadstor.service; disabled;
>>> vendor preset: enabled)
>>>Active: active (running) since Fri 2023-03-10 08:58:11 CST; 44min ago
>>>   Process: 1592 ExecStart=/quadstor/etc/quadstor.init start
>>> (code=exited, status=0/SUCCESS)
>>>  Main PID: 4169 (ietd)
>>>CGroup: /system.slice/quadstor.service
>>>└─4169 /quadstor/sbin/ietd
>>>
>>> Mar 10 08:57:52 rw-quadstor systemd[1]: Starting QUADStor Storage
>>> Virtualization...
>>> Mar 10 08:57:53 rw-quadstor mdaemon[4172]: ERROR: disk_getsize:87 Unable
>>> to open device /dev/mapper/rw-quadstor-vg-root
>>> Mar 10 08:57:53 rw-quadstor mdaemon[4172]: ERROR: disk_getsize:87 Unable
>>> to open device /dev/mapper/rw-quadstor-vg-swap_1
>>> Mar 10 08:58:11 rw-quadstor quadstor.init[1592]: Starting quadstor:
>>> Getting VDisk list failed
>>> Mar 10 08:58:11 rw-quadstor systemd[1]: Started QUADStor Storage
>>> Virtualization.
>>> Mar 10 08:58:16 rw-quadstor systemd[1]: Started QUADStor Storage
>>> Virtualization.
>>>
>>> /# mdadm --query --detail /dev/md127
>>> /dev/md127:
>>> Version : 1.2
>>>   Creation Time : Mon Jun 22 08:48:34 2020
>>>  Raid Level : raid6
>>>  Array Size : 17575566336 (16761.37 GiB 17997.38 GB)
>>>   Used Dev Size : 1952840704 (1862.37 GiB 1999.71 GB)
>>>Raid Devices : 11
>>>   Total Devices : 11
>>> Persistence : Superblock is persistent
>>>
>>>   Intent Bitmap : Internal
>>>
>>> Update Time : Fri Mar 10 07:51:17 2023
>>>   State : clean
>>>  Active Devices : 11
>>> Working Devices : 11
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>
>>>  Layout : left-symmetric
>>>  Chunk Size : 64K
>>>
>>>Name : localhost:rw-esos-sw-raid6
>>>UUID : b4f3e147:50a5c1fe:a12a4c3f:9cf61f98
>>>  Events : 243231
>>>
>>> Number   Major   Minor   RaidDevice State
>>>    0       8       16        0      active sync   /dev/sdb
>>>    1       8      128        1      active sync   /dev/sdi
>>>    2       8       32        2      active sync   /dev/sdc
>>>    3       8      144        3      active sync   /dev/sdj
>>>   11       8       80        4      active sync   /dev/sdf
>>>    5       8      160        5      active sync   /dev/sdk
>>>    6       8       64        6      active sync   /dev/sde
>>>    7       8       96        7      active sync   /dev/sdg
>>>    8       8        0        8      active sync   /dev/sda
>>>    9       8      112        9      active sync   /dev/sdh
>>>   10       8       48       10      active sync   /dev/sdd
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "QUADStor Storage Virtualization" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to quadstor-virt+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/quadstor-virt/41712316-754b-4a95-94f7-fc8afd1b7bfan%40googlegroups.com
>>> <https://groups.google.com/d/msgid/quadstor-virt/41712316-754b-4a95-94f7-fc8afd1b7bfan%40googlegroups.com?utm_medium=email&utm_

Re: [quadstor-virt] install failed

2023-07-24 Thread QUADStor Support
Hi,

We should get this fixed in a day or two.

On Fri, Jul 21, 2023 at 3:48 PM fredo...@gmail.com 
wrote:

> Hello Quadstor !
>
> Thanks a lot for this nice product !
>
> I have an error on install quadstor-virt-3.2.24-rhel.x86_64.rpm on a
> streamos 8
>
> so I launch :
>
>  ./builditfusr
> make -C /lib/modules/4.18.0-500.el8.x86_64/build M=/quadstor/src/export
> clean
> make[1]: Entering directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
>   CLEAN   /quadstor/src/export/.tmp_versions
>   CLEAN   /quadstor/src/export/Module.symvers
> make[1]: Leaving directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
> rm -rf *.o *.ko* .*.cmd *.mod.c .tmp_versions .*.o.d Module.* *.unsigned
> modules.* *-safe .cache.mk
> make -C /lib/modules/4.18.0-500.el8.x86_64/build M=/quadstor/src/export
> modules
> make[1]: Entering directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
>   CC [M]  /quadstor/src/export/ldev_linux.o
>   CC [M]  /quadstor/src/export/devq.o
>   LD [M]  /quadstor/src/export/ldev.o
>   CC [M]  /quadstor/src/export/core_cluster.o
>   CC [M]  /quadstor/src/export/core_send.o
>   CC [M]  /quadstor/src/export/core_msg.o
>   CC [M]  /quadstor/src/export/core_itf.o
>   CC [M]  /quadstor/src/export/devqlink.o
>   CC [M]  /quadstor/src/export/lz4.o
>   LD [M]  /quadstor/src/export/coredev.o
>   Building modules, stage 2.
>   MODPOST 2 modules
>   CC  /quadstor/src/export/coredev.mod.o
>   LD [M]  /quadstor/src/export/coredev.ko
>   CC  /quadstor/src/export/ldev.mod.o
>   LD [M]  /quadstor/src/export/ldev.ko
> make[1]: Leaving directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
> make -C /lib/modules/4.18.0-500.el8.x86_64/build
> M=/quadstor/src/target-mode/iscsi/kernel clean
> make[1]: Entering directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
>   CLEAN   /quadstor/src/target-mode/iscsi/kernel/.tmp_versions
>   CLEAN   /quadstor/src/target-mode/iscsi/kernel/Module.symvers
> make[1]: Leaving directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
> rm -rf *.o *.ko* .*.cmd *.mod.c .tmp_versions .*.o.d Module.* *.unsigned
> modules.* *-safe .cache.mk *.dwo
> make -C /lib/modules/4.18.0-500.el8.x86_64/build
> M=/quadstor/src/target-mode/iscsi/kernel modules
> make[1]: Entering directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/tio.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/iscsi.o
> /quadstor/src/target-mode/iscsi/kernel/iscsi.c: In function 'data_out_start':
> /quadstor/src/target-mode/iscsi/kernel/iscsi.c:943:6: warning: unused
> variable 'retval' [-Wunused-variable]
>   int retval;
>   ^~
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/nthread.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/config.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/digest.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/conn.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/session.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/target.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/event.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/param.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/scdefs.o
>   CC [M]  /quadstor/src/target-mode/iscsi/kernel/seq_list.o
>   LD [M]  /quadstor/src/target-mode/iscsi/kernel/iscsit.o
>   Building modules, stage 2.
>   MODPOST 1 modules
>   CC  /quadstor/src/target-mode/iscsi/kernel/iscsit.mod.o
>   LD [M]  /quadstor/src/target-mode/iscsi/kernel/iscsit.ko
> make[1]: Leaving directory '/usr/src/kernels/4.18.0-500.el8.x86_64'
> rm -f *.o ietd ietadm
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o ietd.o ietd.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o iscsid.o iscsid.c
> iscsid.c: In function 'text_scan_login':
> iscsid.c:291:8: warning: variable 'err' set but not used
> [-Wunused-but-set-variable]
> int err;
> ^~~
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o conn.o conn.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o session.o session.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o target.o target.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o message.o message.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o ctldev.o ctldev.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o log.o log.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o chap.o chap.c
> cc -DLINUX -O2 -fno-inline -Wall -Wstrict-prototypes -g -I../include -I.
> -D_GNU_SOURCE-c -o event.o event.c
> cc -DLINUX -O

Re: [quadstor-virt] Slow FC rate performance

2023-08-27 Thread QUADStor Support
Hi,

Please try the following

a. Update to the latest 3.2.27 from
https://www.quadstor.com/storage-virtualization-enterprise-edition-downloads.html
b. Update the firmware on the cards
https://docs.oracle.com/en/networking/storage/host-bus-adapters/universal-host-bus-adapters/install-guide/z40036871045586.html
The current firmware version 6.06.03 does not seem to be the latest. Also, the
system does not have an 8Gb card but a QLogic QLE8362 - 7101674, Sun Storage
16Gb FC PCIe Universal HBA; maybe you meant to send the diagnostics logs from
a different system?
c. Try with iSCSI to see whether the issue is specific to FC (a quick sanity
check is sketched below).

For VMs it is recommended to create a pool with Dedupe "Lite" instead of
"Enabled". Given the ever-changing data within a VM, Lite gives good
performance while still deduplicating easy-to-find duplicates such as zeroed
data.
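
For the iSCSI comparison in (c), a rough sequential-read check from a Linux
test host would look something like the sketch below; the portal address and
target IQN are placeholders, use the values shown for your VDisk:

    # open-iscsi initiator commands; replace the portal address and target IQN
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T <vdisk-target-iqn> -p 192.168.0.10 --login
    # assuming the VDisk shows up as /dev/sdX, read 4 GB directly from the device
    dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct

If iSCSI gives a much better rate than FC on the same VDisk, the problem is
most likely in the QLogic driver/firmware path rather than in the storage
stack itself.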

On Sat, Aug 26, 2023 at 10:38 PM Felix Palmieri 
wrote:

> Hi, I am experimenting with the Storage Virtualization product on Centos
> 6.9 to provide (in experimental mode) a datastore via FC to an ESXi6
> hypervisor.
> The set configuration is as follows.
> • IBM XIV node, 2 x 12 threads, 48 Gb ram, 12 x 2 Tb SAS HDD
> • OS Centos 6.9
> • Qlogic QLE2562 2 x 8Gb FC Controller
> • Quadstor Storage Virtualization (last)
> On the ESXi6 hypervisor side the following configuration
> • IBM x3650 M4, 2 x 16 threads, 112 Gb RAM, 8 x 600 Gb HDD
> • Qlogic QLE2562 2 x 8Gb FC Controller
> On the node where Quadstor is installed, a 5 Tb vDisk configured for
> datastore mode (512 byte Emulation) and Enable Verify is created.
> Everything is connected perfectly, the hypervisor detects (rules through)
> the datastore in the Quadstor SAN, so far everything is fine.
> The issue is the extremely slow transfer rate, when you want to copy a VM
> to or from the Quadstor datastore the rate is 19 / 25 Mg p/sec maximum.
> Could it be the Qlogic drives in Centos? Any configuration detail that I
> didn't take into account?
> For the purposes of analysis I leave the extraction of the dump file in
> System in the product interface.
> I would appreciate any guidance on this matter, thank you very much!
>
> --
> You received this message because you are subscribed to the Google Groups
> "QUADStor Storage Virtualization" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to quadstor-virt+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/quadstor-virt/7526e75b-44ce-4f2f-ab53-f02216c83591n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"QUADStor Storage Virtualization" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to quadstor-virt+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/quadstor-virt/CAOhKxnmSN8_H9SheM%3Dhb8hnv06_pYqCyvgMqUWQuRHO%2BV6qLkQ%40mail.gmail.com.