2013/10/24 Wido den Hollander w...@42on.com:
I have never seen one Intel SSD fail. I've been using them since the X25-M
80GB SSDs and those are still in production without even one wearing out or
failing.
Which kind of SSD are you using right now as a journal?
1. You need a minimum of two nodes for the file system, plus one more node
for an extra monitor; otherwise reads/writes will block when you restart
one of them (read up on monitor quorum:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/).
2. Not too familiar with openstack
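On point 1: with three monitors, any single one can be restarted and the remaining two still form a majority. A minimal ceph.conf sketch for three monitors (hostnames and addresses here are made up for illustration):

```ini
[mon.a]
    host = node1
    mon addr = 192.168.0.1:6789
[mon.b]
    host = node2
    mon addr = 192.168.0.2:6789
[mon.c]
    host = node3
    mon addr = 192.168.0.3:6789
```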
Hi Michael,
Thank you for all your support. I will give it a try.
Regards,
Raghavendra Lad
On Sat, 26 Oct 2013 18:18:48 +0530 wrote
1. You need a minimum of two nodes for
the file system + one more node for an extra monitor otherwise
it'll block reads/writes
Hello,
Two OSDs in our small 2 node cluster have suddenly filled up.
Following the docs, I moved 2 PG dirs to another disk to free some
disk space.
Unfortunately, after this the OSD cannot start.
Please advise! This happened before the 2:2 replication finished, so it is
absolutely
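For anyone hitting the same wall, a hedged sketch of the commands typically used to diagnose a full OSD before touching its data directory by hand (the OSD id and mount path are placeholders):

```
ceph health detail                 # lists which OSDs are near-full or full
df -h /var/lib/ceph/osd/ceph-0     # check actual usage on the OSD's mount
ceph osd reweight 0 0.8            # temporarily push data off osd.0
```

Note that moving PG directories out of an OSD's data dir manually is risky; the OSD expects its store to be intact, which may well be why it no longer starts.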
Hi,
Can radosgw-admin object unlink do something like a 'blind bucket'
(an object in a bucket without an rgw index)?
--
Regards
Dominik
2013/10/13 Dominik Mostowiec dominikmostow...@gmail.com:
hmm, 'tail' - do you mean the file/object content?
I thought that this command might be a workaround for 'blind
No. The object unlink option is to delete an object via radosgw-admin.
It has nothing to do with index-less buckets.
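For reference, the invocation looks something like this (the bucket and object names are placeholders; check `radosgw-admin help` for the exact flags on your version):

```
radosgw-admin object unlink --bucket=mybucket --object=myobject
```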
On Sat, Oct 26, 2013 at 2:01 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi,
Can radosgw-admin object unlink do something like a 'blind bucket'
(object in bucket
Hi,
I am wondering if anyone has successfully been able to upload files larger than
5GB using radosgw.
I have tried various clients, including DragonDisk, CrossFTP, s3cmd,
etc. All of them have failed with a 'permission denied' response.
Each of the clients say they support multi-part
Hi Shain,
Yes, we have tested and have working S3 multipart support for files over 5GB
(RHEL64/0.67.4).
However, CrossFTP does not seem to support multipart unless you have the Pro
version. DragonDisk gives the error I have seen when using a PUT
and not multipart, EntityTooLarge. My guess is that it
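As background, the 5GB ceiling is the S3 single-PUT object limit; anything bigger has to go through the multipart API, which is why a plain PUT comes back with EntityTooLarge. A small Python sketch (the chunk size is illustrative; in s3cmd it is set via --multipart-chunk-size-mb) of how many parts a large upload splits into:

```python
import math

# 5 GiB: the largest object a single S3 PUT accepts
S3_SINGLE_PUT_LIMIT = 5 * 1024**3

def plan_parts(file_size, chunk_size=15 * 1024**2):
    """Return how many upload requests a file of file_size bytes needs."""
    if file_size <= S3_SINGLE_PUT_LIMIT:
        return 1  # a plain PUT is enough
    # otherwise the client must use multipart, one request per chunk
    return math.ceil(file_size / chunk_size)

# A 10 GiB file with 15 MiB chunks:
print(plan_parts(10 * 1024**3))  # → 683
```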
I'll try the pro version of crossftp as soon as I have a chance.
Here is the output using s3cmd version 1.1.0-beta3:
root@theneykov:/mnt/samba-rbd/Vantage/Incoming/ascvid# s3cmd -v put --debug 2
20130718_ascvid_cheyennemizeTEST4.mov s3://linux
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
Derek,
I also just got a 30-day Pro evaluation license for CrossFTP. Even though I
am using the 'pro' version at this point, I am still getting the same
'permission denied' error.
Can you tell me what client you are using with files over 5GB, and if you have
anything special in your
After doing a bit more digging, it seems I was getting a 400-level HTTP
response when trying to upload the large file (10 GB).
I was able to get around it by renaming the file (from one with no extension)
to a .txt file.
I had created the file on a mac using the 'mkfile' command for