Hi All,
I'm glad to announce SCST 3.3 pre-release code freeze in the SCST SVN branch 3.3.x.
You can get it with the command:
$ svn co https://scst.svn.sourceforge.net/svnroot/scst/branches/3.3.x
It is going to be released after a few weeks of testing, if no significant issues are found.
SCST is
Hi All,
I'm glad to announce that SCST 3.2 has just been released.
You can download it from http://scst.sourceforge.net/downloads.html
SCST is an alternative SCSI target stack for Linux. SCST allows the creation of sophisticated
storage devices, which can provide advanced functionality, like replication,
Hi All,
I'm glad to announce SCST 3.2 pre-release code freeze in the SCST SVN branch 3.2.x.
You can get it with the command:
$ svn co https://scst.svn.sourceforge.net/svnroot/scst/branches/3.2.x
It is going to be released after a few weeks of testing, if no significant issues are found.
SCST is
Hi All,
I'm glad to announce that SCST version 3.1 has just been released and is available for
download from http://scst.sourceforge.net/downloads.html.
Highlights for this release:
- Cluster support for SCSI reservations. This feature is essential for initiator-side
clustering approaches based
Hi,
Bike & Snow wrote on 11/06/2015 10:55 AM:
> Hello Vlad
>
> Excellent news on all the updates.
>
> Regarding this:
> - QLogic target driver has been significantly improved.
>
> Does that mean I should stop building the QLogic target driver from here?
> git://git.qlogic.com/scst-qla2xxx.git
Hi All,
I'm glad to announce SCST 3.1 pre-release code freeze in the SCST SVN branch 3.1.x.
You can get it with the command:
$ svn co https://scst.svn.sourceforge.net/svnroot/scst/branches/3.1.x
It is going to be released after a few weeks of testing, if no significant issues are found.
Highlights for
I'm glad to announce that the 3.0.1 maintenance update for SCST and its drivers has just
been released and is ready for download from
http://scst.sourceforge.net/downloads.html.
All SCST users are encouraged to update.
SCST is an alternative SCSI target stack for Linux. SCST allows the creation of
No, because it's too new, but you can always get it from git. Or you
can use the stable Emulex driver for 16Gb connectivity. It's not in the
bundle only because of Emulex's policy.
Thanks,
Vlad
On 9/19/2014 23:59, scst.n...@gmail.com wrote:
Is 16Gb qla2x00t included?
Sent from my Xiaomi phone
Vladislav
Hi All,
I'm glad to announce that SCST 3.0 has just been released. This release includes SCST
core, target drivers iSCSI-SCST for iSCSI, including iSER support (thanks to
Mellanox!), qla2x00t for QLogic Fibre Channel adapters, ib_srpt for InfiniBand SRP,
fcst for FCoE and scst_local for local
Hi All,
I'm glad to announce SCST 3.0 pre-release code freeze in the SCST SVN branch 3.0.x.
You can get it with the command:
$ svn co https://scst.svn.sourceforge.net/svnroot/scst/branches/3.0.x
It is going to be released after a few weeks of testing, if nothing bad is found.
SCST is an alternative SCSI
I'm glad to announce that the SCST iSER target driver is available for testing from
the SCST SVN iser branch. You can download it either with the command:
$ svn checkout svn://svn.code.sf.net/p/scst/svn/branches/iser iser-scst-branch
or by clicking the "Download Snapshot" button on
Vlad
> boris
>
>> -Original Message-
>> From: Matthew Wilcox [mailto:wi...@linux.intel.com]
>> Sent: Thursday, September 26, 2013 1:56 PM
>> To: Zuckerman, Boris
>> Cc: Vladislav Bolkhovitin; rob.gitt...@linux.intel.com;
>> linux-p...@list
-Original Message-
From: Matthew Wilcox [mailto:wi...@linux.intel.com]
Sent: Thursday, September 26, 2013 1:56 PM
To: Zuckerman, Boris
Cc: Vladislav Bolkhovitin; rob.gitt...@linux.intel.com;
linux-p...@lists.infradead.org;
linux-fsde...@veger.org; linux-kernel@vger.kernel.org
Subject
Hi Rob,
Rob Gittins, on 09/23/2013 03:51 PM wrote:
> On Fri, 2013-09-06 at 22:12 -0700, Vladislav Bolkhovitin wrote:
>> Rob Gittins, on 09/04/2013 02:54 PM wrote:
>>> Non-volatile DIMMs have started to become available. An NVDIMM is a
>>> DIMM that does not lose data
Rob Gittins, on 09/04/2013 02:54 PM wrote:
> Non-volatile DIMMs have started to become available. An NVDIMM is a
> DIMM that does not lose data across power interruptions. Some of the
> NVDIMMs act like memory, while others are more like a block device
> on the memory bus. Application uses vary
I'm glad to announce that SCST support for 16Gb/s FC and FCoE Emulex CNAs is now
available as part of the Emulex OneCore Storage SDK tool set based on the Emulex SLI-4
API. Support for 16Gb/s Fibre Channel LPe16000 series and FCoE hardware using target
mode versions of the OneConnect FCoE CNAs
Martin K. Petersen, on 05/28/2013 01:25 PM wrote:
> Vladislav> The Linux block layer is a purely artificial creature, slowly
> Vladislav> reinventing the wheel and creating more problems than it solves.
>
> On the contrary. I do think we solve a whole bunch of problems.
>
>
> Vladislav> It enforces an approach,
Martin K. Petersen, on 05/22/2013 09:32 AM wrote:
> Paolo> First of all, I'll note that SG_IO and block-device-specific
> Paolo> ioctls both have their place. My usecase for SG_IO is
> Paolo> virtualization, where I need to pass information from the LUN to
> Paolo> the virtual machine with as
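As background for the SG_IO use case mentioned above: SG_IO lets user space send a raw CDB to a SCSI device and get the data and sense back, which is how a hypervisor can pass LUN-level information straight through to a guest. Below is a minimal illustrative sketch (my example, not code from this thread) that issues a standard INQUIRY through the Linux SG_IO ioctl; the /dev/sg0 path and buffer sizes are assumptions made for the example.

/* Illustrative only: issue a standard SCSI INQUIRY via the SG_IO ioctl. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <scsi/sg.h>

int main(void)
{
    unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 };  /* INQUIRY, 96-byte allocation length */
    unsigned char data[96], sense[32];
    struct sg_io_hdr hdr;
    int fd = open("/dev/sg0", O_RDWR);                /* device path is an assumption */

    if (fd < 0) { perror("open"); return 1; }
    memset(&hdr, 0, sizeof(hdr));
    hdr.interface_id = 'S';                           /* always 'S' for the sg driver */
    hdr.cmdp = cdb;
    hdr.cmd_len = sizeof(cdb);
    hdr.dxfer_direction = SG_DXFER_FROM_DEV;          /* data flows from the device */
    hdr.dxferp = data;
    hdr.dxfer_len = sizeof(data);
    hdr.sbp = sense;
    hdr.mx_sb_len = sizeof(sense);
    hdr.timeout = 5000;                               /* milliseconds */
    if (ioctl(fd, SG_IO, &hdr) < 0) { perror("SG_IO"); close(fd); return 1; }
    printf("vendor/product: %.24s\n", data + 8);      /* bytes 8-31 of INQUIRY data */
    close(fd);
    return 0;
}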
Hello,
I keep getting on each reboot of my kernel 3.9.1 debug system:
[ 42.037225] ------------[ cut here ]------------
[ 42.037237] WARNING: at lib/dma-debug.c:937 check_unmap+0x45f/0x8b0()
[ 42.037240] Hardware name: PowerEdge R710
[ 42.037243] ioatdma :00:16.0: DMA-API: device
Andreas Steinmetz, on 01/16/2013 08:19 PM wrote:
Thus, lio (http://www.linux-iscsi.org/) seemed to be the politically and
technically favoured solution.
[...]
The fun part of it was that I finally ended up using SCST - which was
refrained from kernel inclusion for technical reasons beyond
SCST version 2.2.1 has just been released. This release includes SCST core, target
drivers iSCSI-SCST (iSCSI), qla2x00t (QLogic Fibre Channel), ib_srpt (InfiniBand
SRP) and scst_local (local loopback-like access) as well as SCST management
utility scstadmin.
SCST allows the creation of
Nico Williams, on 11/26/2012 03:05 PM wrote:
Vlad,
You keep saying that programmers don't understand "barriers". You've
provided no evidence of this. Meanwhile memory barriers are generally
well understood, and every programmer I know understands that a
"barrier" is a synchronization
Vladislav Bolkhovitin, on 11/17/2012 12:02 AM wrote:
The easiest way to implement this fsync would involve three things:
1. Schedule writes for all dirty pages in the fs cache that belong to
the affected file, wait for the device to report success, issue a cache
flush to the device (or request
Chris Friesen, on 11/15/2012 05:35 PM wrote:
The easiest way to implement this fsync would involve three things:
1. Schedule writes for all dirty pages in the fs cache that belong to
the affected file, wait for the device to report success, issue a cache
flush to the device (or request ordering
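To make the three steps quoted above concrete, here is a minimal user-space analogue (my illustration, not code from this thread) of the same split: sync_file_range() schedules writeback for the file's dirty pages and waits for that writeback to complete, but it gives no guarantee about the drive's volatile cache, so a final fdatasync() is still needed for the cache-flush step. The file name is an assumption made for the example.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int flush_file(int fd)
{
    /* Steps 1-2: start writeback for the whole file and wait for it to finish. */
    if (sync_file_range(fd, 0, 0,
                        SYNC_FILE_RANGE_WAIT_BEFORE |
                        SYNC_FILE_RANGE_WRITE |
                        SYNC_FILE_RANGE_WAIT_AFTER) < 0)
        return -1;
    /* Step 3: fdatasync() also asks the block layer to flush the device's
     * volatile write cache; sync_file_range() alone gives no such guarantee. */
    return fdatasync(fd);
}

int main(void)
{
    int fd = open("testfile", O_RDWR | O_CREAT, 0644);   /* file name is an assumption */
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, "hello\n", 6) != 6) { perror("write"); return 1; }
    if (flush_file(fd) < 0) { perror("flush"); return 1; }
    close(fd);
    return 0;
}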
David Lang, on 11/15/2012 07:07 AM wrote:
There's no such thing as "barrier". It is fully artificial abstraction. After
all, at the bottom of your stack, you will have to translate it either to cache
flush, or commands order enforcement, or both.
When people talk about barriers, they are
杨苏立 Yang Su Li, on 11/15/2012 11:14 AM wrote:
1. fsync actually does two things at the same time: ordering writes (in a
barrier-like manner), and forcing cached writes to disk. This makes it very
difficult to implement fsync efficiently.
Exactly!
However, logically they are two distinctive
Nico Williams, on 11/13/2012 02:13 PM wrote:
declaring groups of internally-unordered writes where the groups are
ordered with respect to each other... is practically the same as
barriers.
Which barriers? Barriers meaning cache flush or barriers meaning commands order,
or barriers meaning
Alan Cox, on 11/13/2012 12:40 PM wrote:
Barriers are pretty much universal as you need them for power off !
I'm afraid no storage (drives, if you like that term more) at the moment supports
barriers and, as far as I know the history of storage, never has.
The ATA cache flush is a
杨苏立 Yang Su Li, on 11/10/2012 11:25 PM wrote:
SATA's Native Command
Queuing (NCQ) is not equivalent; this allows the drive to reorder
requests (in particular read requests) so they can be serviced more
efficiently, but it does *not* allow the OS to specify a partial,
relative ordering of
Richard Hipp, on 11/02/2012 08:24 AM wrote:
SQLite cares. SQLite is an in-process, transactional, zero-configuration
database that is estimated to be used by over 1 million distinct
applications and to have over 2 billion deployments. SQLite uses
ordinary disk files in ordinary directories,
Alan Cox, on 11/02/2012 08:33 AM wrote:
b) most drives will internally re-order requests anyway
They will but only as permitted by the commands queued, so you have some
control depending upon the interface capabilities.
c) cheap drives won't support barriers
Barriers are pretty
Howard Chu, on 11/01/2012 08:38 PM wrote:
Alan Cox wrote:
How about the fact that the preliminary infrastructure to send ORDERED commands
instead of draining the queue was recently deleted from the kernel, because "there's no
difference where to drain the queue, on the kernel or the storage side"?
Send
Alan Cox, on 11/01/2012 05:24 PM wrote:
How about the fact that the preliminary infrastructure to send ORDERED commands
instead of draining the queue was recently deleted from the kernel, because "there's no
difference where to drain the queue, on the kernel or the storage side"?
Send patches.
OK, then we
Alan Cox, on 10/31/2012 05:54 AM wrote:
I don't want to flame on this topic, but you are not right here. As far as I can
see, a big chunk of Linux storage and file system developers are/were employed
by the "gold-plated storage" manufacturers, starting from FusionIO, SGI and Oracle.
You know,
Theodore Ts'o, on 10/27/2012 12:44 AM wrote:
On Fri, Oct 26, 2012 at 09:54:53PM -0400, Vladislav Bolkhovitin wrote:
What is different in our positions is that you are considering storage
as something you can connect to your desktop, while in my view
storage is something which stores data
Theodore Ts'o, on 10/25/2012 09:50 AM wrote:
Yeah I don't buy that. One, flash is still too expensive. Two,
the capital costs to build enough Silicon foundries to replace the
current production volume of HDD's is way too expensive for any
company to afford (the cloud providers are buying
Theodore Ts'o, on 10/25/2012 01:14 AM wrote:
On Tue, Oct 23, 2012 at 03:53:11PM -0400, Vladislav Bolkhovitin wrote:
Yes, SCSI has full support for ordered/simple commands designed
exactly for that task: to have a steady flow of commands even in the case
when some of them are ordered.
SCSI does
Nico Williams, on 10/24/2012 05:17 PM wrote:
Yes, SCSI has full support for ordered/simple commands designed exactly for
that task: [...]
[...]
But historically, for some reason, Linux storage developers were stuck with
the "barriers" concept, which is obviously not the same as ORDERED commands,
杨苏立 Yang Su Li, on 10/11/2012 12:32 PM wrote:
I am not quite sure whether I should ask this question here, but in terms
of lightweight barrier/fsync, could anyone tell me why the device
driver / OS provides the barrier interface rather than some other
abstraction anyway? I am sorry if this sounds
Christoph Hellwig, on 10/01/2012 04:46 AM wrote:
On Sun, Sep 30, 2012 at 05:58:11AM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger
This patch re-adds the ability to optionally run in buffered FILEIO mode
(eg: w/o O_DSYNC) for device backends in order to once again use the
Linux
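For readers following the buffered FILEIO point: the difference being discussed is whether the backing file is opened with O_DSYNC. A minimal sketch (my illustration, not the patch itself; the file name is an assumption) of the two open modes:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Buffered mode: write() returns once the data is in the page cache. */
    int buffered = open("backing.img", O_RDWR | O_CREAT, 0644);
    /* Write-through mode: O_DSYNC makes each write() wait until the data
     * (and the metadata needed to read it back) reaches stable storage. */
    int synced = open("backing.img", O_RDWR | O_CREAT | O_DSYNC, 0644);

    if (buffered < 0 || synced < 0) { perror("open"); return 1; }
    if (write(buffered, "fast, cached\n", 13) != 13) perror("buffered write");
    if (write(synced, "slow, durable\n", 14) != 14) perror("synced write");
    close(buffered);
    close(synced);
    return 0;
}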
Tomasz Chmielewski wrote:
I have a 1.2 TB (of which 750 GB is used) filesystem which holds
almost 200 million files.
1.2 TB doesn't make this filesystem that big, but 200 million files
is a decent number.
Most of the files are hardlinked multiple times, some of them are
hardlinked
Luben Tuikov wrote:
Is there an open iSCSI Target implementation which does NOT
issue commands to sub-target devices via the SCSI mid-layer, but
bypasses it completely?
What do you mean? To call directly low level backstorage SCSI drivers
queuecommand() routine? What are advantages of
Nicholas A. Bellinger wrote:
On Thu, 2008-02-07 at 12:37 -0800, Luben Tuikov wrote:
Is there an open iSCSI Target implementation which does NOT
issue commands to sub-target devices via the SCSI mid-layer, but
bypasses it completely?
Luben
Hi Luben,
I am guessing you mean further down
Nicholas A. Bellinger wrote:
- It has been discussed which iSCSI target implementation should be in
the mainstream Linux kernel. There is no agreement on this subject
yet. The short-term options are as follows:
1) Do not integrate any new iSCSI target implementation in the
mainstream Linux
[EMAIL PROTECTED] wrote:
On Thu, 7 Feb 2008, Vladislav Bolkhovitin wrote:
Bart Van Assche wrote:
- It has been discussed which iSCSI target implementation should be in
the mainstream Linux kernel. There is no agreement on this subject
yet. The short-term options are as follows:
1) Do
Luben Tuikov wrote:
Is there an open iSCSI Target implementation which does NOT
issue commands to sub-target devices via the SCSI mid-layer, but
bypasses it completely?
What do you mean? To call directly low level backstorage SCSI drivers
queuecommand() routine? What are advantages of it?
Bart Van Assche wrote:
Since the focus of this thread shifted somewhat in the last few
messages, I'll try to summarize what has been discussed so far:
- There were a number of participants who joined this discussion
spontaneously. This suggests that there is considerable interest in
networked
James Bottomley wrote:
On Tue, 2008-02-05 at 21:59 +0300, Vladislav Bolkhovitin wrote:
Hmm, how can one write to an mmapped page and not touch it?
I meant from user space ... the writes are done inside the kernel.
Sure, the mmap() approach was agreed to be impractical, but could you
Jeff Garzik wrote:
iSCSI is way, way too complicated.
I fully agree. On the one hand, all that complexity is unavoidable for the
case of multiple connections per session, but for the regular case of
one connection per session it could be a lot simpler.
Actually, think about those multiple
Erez Zilber wrote:
Bart Van Assche wrote:
As you probably know there is a trend in enterprise computing towards
networked storage. This is illustrated by the emergence during the
past few years of standards like SRP (SCSI RDMA Protocol), iSCSI
(Internet SCSI) and iSER (iSCSI Extensions for
Jeff Garzik wrote:
Alan Cox wrote:
better. So for example, I personally suspect that ATA-over-ethernet is way
better than some crazy SCSI-over-TCP crap, but I'm biased for simple and
low-level, and against those crazy SCSI people to begin with.
Current ATAoE isn't. It can't support NCQ. A
Linus Torvalds wrote:
I'd assumed the move was primarily because of the difficulty of getting
correct semantics on a shared filesystem
.. not even shared. It was hard to get correct semantics full stop.
Which is a traditional problem. The thing is, the kernel always has some
internal
Linus Torvalds wrote:
So just going by what has happened in the past, I'd assume that iSCSI
would eventually turn into "connecting/authentication in user space" with
"data transfers in kernel space".
This is exactly how iSCSI-SCST (iSCSI target driver for SCST) is
implemented, credits to IET
James Bottomley wrote:
On Mon, 2008-02-04 at 21:38 +0300, Vladislav Bolkhovitin wrote:
James Bottomley wrote:
On Mon, 2008-02-04 at 20:56 +0300, Vladislav Bolkhovitin wrote:
James Bottomley wrote:
On Mon, 2008-02-04 at 20:16 +0300, Vladislav Bolkhovitin wrote:
James Bottomley wrote