On 12/01/2018 3:14 AM, Darrell Budic wrote:
It would also add physical resource requirements to future client
deployments, requiring more than 1U for the server (most likely), and I’m
not likely to want to do this if I’m trying to optimize for client
density, especially with the cost of GPUs
On Thu, Jan 11, 2018 at 10:44 PM, Darrell Budic
wrote:
> Sounds like a good option to look into, but I wouldn’t want it to take
> time & resources away from other, non-GPU based, methods of improving this.
> Mainly because I don’t have discrete GPUs in most of my systems.
This morning I did a rolling update from the latest 3.7.x to 3.12.4,
with no client activity. "Rolling" as in: shut down the Gluster
services on the first server, update, reboot, wait until it is up and
running, then proceed to the next server. I anticipated that a 3.12
server might not properly talk to a
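The rolling procedure described above can be sketched roughly as follows, one server at a time. This is a hedged sketch, not the author's exact commands: the service and package names (`glusterd`, `glusterfs-server`) and the package manager are assumptions that vary by distribution, and `<volname>` is a placeholder.

```shell
# Per-node rolling update, run on each server in turn:
systemctl stop glusterd                 # stop Gluster services on this node
yum -y update glusterfs-server          # upgrade packages (name/manager may vary)
reboot
# After the node is back up, verify it rejoined the cluster before moving on:
gluster peer status
gluster volume heal <volname> info      # wait for pending heals to drain
```

Checking heal status between nodes matters because proceeding while heals are pending can leave the next node as the only good copy of some files.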
Hi Nithya
Thanks for helping me with this; I understand now, but I have a few questions.
When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to
add a brick, it failed.
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2
> gluster01ib:/gdata/brick2/scratch
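One likely cause of that failure (an inference, not confirmed by the quoted output) is that bricks must be added to a replicated volume in multiples of the replica count, so a replica-2 volume needs two new bricks at once, one for each copy of the data. A hedged sketch, reusing the hostname and path style from the quoted command; the second host and brick path are assumptions:

```shell
# Expanding a replica-2 volume requires bricks in pairs,
# one per replica, ideally on different servers:
gluster volume add-brick scratch replica 2 \
    gluster01ib:/gdata/brick2/scratch \
    gluster02ib:/gdata/brick2/scratch
```

Supplying only a single brick with `replica 2`, as in the quoted command, is rejected because Gluster cannot place the second copy.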
Gluster Users,
The Gluster community is deprecating the running of regression tests for
every commit on NetBSD, and will in the future continue only build sanity
(and the handling of any build breakages) on FreeBSD.
We lack contributors who can help us keep the *BSD infrastructure and
functionality up to date and
Answers inline.
On 12/29/2017 01:10 AM, Omar Kohl wrote:
Hi,
I know that "glusterbot" text about ping-timeout almost by heart by now ;-)
I have searched the complete IRC logs and mailing list from the last 4 or 5
years for anything related to ping-timeout.
I have to laugh, because I'm the
Gluster Users,
This is to inform you that from the 4.0 release onward, packages for
CentOS 6 will not be built by the gluster community. This also means
that the CentOS SIG will not receive updates for 4.0 gluster packages.
Gluster release 3.12 and its predecessors will receive CentOS 6 updates
Sounds like a good option to look into, but I wouldn’t want it to take time &
resources away from other, non-GPU based, methods of improving this. Mainly
because I don’t have discrete GPUs in most of my systems. While I could add
them to my main server cluster pretty easily, many of my clients
I like the idea immensely. As long as the GPU usage can be specified as
server-only, client and server, or client and server with a client limit of X.
I don't want to take GPU cycles away from machine learning for file I/O.
It must also support multiple GPUs and GPU pinning. Really useful for
On Thursday 11 January 2018 12:24 PM, Hans Henrik Happe wrote:
Hi,
I wonder how this procedure works. I could add a bug that I think is a
*blocker*, but there might not be consensus.
You can add it to the tracker bug. Depending on the severity, we may or may
not take it for 3.12.5.
--
Jiffin