> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of aft3rgl0w
> Sent: Friday, August 27, 2010 1:58 AM
> To: jBASE
> Subject: Re: jbase 4 performance (speeds...?)
>
> Hello, thanks for replying.
>
> Pawel, trying to answer your questions:
>
> For jBASE:
> 1) Yes, I tried a SELECT FBNK.CATEG.ENTRY, which is distributed over 25
> part files. The size of each part is roughly 610MB, the modulo is 105019,
> and the recommended modulo is 110491. I don't think it's that badly sized
> at the moment. All files are of J4 type.
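A quick sanity check on the volume involved (a back-of-envelope sketch; the 25 x 610MB figures are taken from the message above, while the throughput rates are illustrative assumptions, not measurements):

```shell
# Rough scan-time estimate for a ~15GB distributed file at assumed rates.
total_mb=$((25 * 610))                 # 25 part files x 610MB = 15250MB
echo "total: ${total_mb} MB"

for rate in 10 70; do                  # MB/s: reported SAN speed vs a healthy local array
    secs=$((total_mb / rate))
    echo "at ${rate} MB/s: ${secs} s (~$((secs / 60)) min)"
done
```

So even a perfectly healthy full scan is minutes of wall-clock time, which is worth keeping in mind when comparing SELECT timings against raw dd figures.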
Well, you are now performing reads over 25 individual files, so your performance will be nothing like dd from a pseudo device. In this case it is not the sizing so much as the fact that you won't have that file in memory. Also, is that just a straight SELECT with no criteria, as you imply? Is the select list coming back in the natural order of the part files, and in the natural order of the keys within each part file?

> 2) I'm not sure what you mean by that, so I've pasted the jstat -v output
> for one part file:
>
> jstat -v FBNK.CATEG.ENTRY.NEW.19
> File ../bnk.data/ac/FBNK.CATE038
> Type=J4 , Hash method = 5

Try hash method 2.

> 3) I don't think we have triggers, but I'll have to ask a programmer about
> that.
>
> Answering the SAN-related questions:
>
> 1+2) We are using 4x 146GB 15K SAS drives on 4Gbit FC in RAID 10, so they
> are basically two physical disks mirrored onto another pair. These appear
> as one PV (physical volume) to AIX.

Through the SAN (these always give problems) or directly connected to the system?

> 3) "How many ranks?" I'm not familiar with the term "ranks".
>
> 4) I have been looking for the stripe size for some time, but I cannot find
> anything like that on the storage.

Look at the physical volumes and so on using smitty; it will tell you.

> You are probably asking about the RAID stripe size of the array. (The
> segment size is 128KB, which is another relevant parameter, but adjusting
> it to 256KB some time ago didn't really gain a significant advantage.)

jBASE is reading in 4096-byte blocks. Have you tuned AIX and the filesystem?

> 5) No, the SAN is used by many servers.

OK - you are using a SAN. They are almost universally a royal PITA and perform in all sorts of strange ways.

> 6) Yes, IBM says that there is no problem at all with the storage.
> Nevertheless, they do care about AIX performance, which is OK, so if jBASE
> acts weird they won't really care about it much.
>
> By the way, I forgot to mention that jBASE resides on a JFS filesystem.
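Since jBASE reads in 4096-byte blocks, a large-block dd will always flatter the storage compared with what the database actually does. A minimal sketch of the comparison (the scratch-file name and sizes are arbitrary choices; run it on the filesystem that holds the jBASE files, and note that the page cache will inflate the reads unless the file is bigger than RAM):

```shell
# Time a sequential read with 1MB blocks versus 4KB blocks (jBASE's read size).
f=dd_scratch.tmp
dd if=/dev/zero of="$f" bs=1M count=256 2>/dev/null   # create a 256MB scratch file

time dd if="$f" of=/dev/null bs=1M 2>/dev/null        # large sequential blocks
time dd if="$f" of=/dev/null bs=4k 2>/dev/null        # 4KB blocks, closer to jBASE

rm -f "$f"
```

If the 4KB figure is a fraction of the 1MB figure, that gap is part of why a database scan never approaches the raw dd number.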
> JFS2 was performing way slower even on native AIX commands, so we aren't
> using it at all.

Turn off journaling on your JFS filesystem - it is doing a lot of work that is of no benefit to your jBASE files.

> Jim,
>
> Yes, I know dd from a pseudo device will give me a theoretical maximum, but
> I mentioned it so that you know the system can get past the low speeds of
> 5-10MB/sec, and that there doesn't seem to be anything wrong with the
> storage itself in the first place.

Except that it is a SAN ;-)

> I understand that raw dd is nowhere close to the way a database performs;
> there is a lot of logical processing, as you also pointed out. I have read
> some older posts where you mention jrf and/or tar-compressing and
> extracting the whole environment from scratch. I'm not sure that's possible
> at this time, but I will definitely run an AIX defragmentation.

Give it a go, as it can't harm, but I doubt you will see much difference. Also, try your select on just one of the part files.

> I know SSDs now run fine on desktops, with some amazing write speeds like
> your Vortex. I haven't really seen any on servers or storage, so I don't
> have an opinion on that. I think it would be a bit weird to propose an SSD
> solution now, since we actually got an expansion of our storage a couple of
> months ago and we are using the storage for nearly everything.

Maybe, and I see your point, but for a few thousand bucks you won't look back, and SSD reliability is now at least as good as hard drives, probably better.

> Plus, I think it would probably be a bit expensive, but that's just a
> guess; I haven't looked it up.

About $600 per disk for SATA-II, with read/write throughput around 380MB/s continuous. Stripe/RAID four of those guys together and you are laughing.

> However, Jim, I would like to hear a few numbers regarding speed, even just
> for the record.
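When trying the select on just one part file, it may help to first time a raw sequential read of that same file, so the database overhead can be separated from the I/O path. A hedged sketch (the part-file path is the one shown in the jstat output earlier in the thread and will differ on other systems):

```shell
# Raw sequential read of one part file, bypassing jBASE entirely.
# If the SELECT on this file is far slower than this read, the cost is in
# the database layer; if this read is itself slow, suspect the I/O path.
part=../bnk.data/ac/FBNK.CATE038      # example path from the jstat output
if [ -f "$part" ]; then
    time dd if="$part" of=/dev/null bs=1M 2>/dev/null
else
    echo "part file not found: $part"
fi
```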
> For example, if 10 people were saying that they have hardware similar to
> ours and they were getting like 60-70MB/s on many of the things they run on
> jBASE, then OK, I would wonder more about what's going wrong with mine.

I suspect that the combination of your tests is just exposing that your SAN is either misconfigured or just not very good. I have yet to meet a SAN system that I didn't hate. The only people who get them to work nicely seem to employ someone full time to tune and rebalance them. It is like the billionaire with two Lamborghinis, because after driving one of them for a day it needs a day's worth of tune-up.

> But if speeds of like 10MB/sec were mentioned, I would worry a bit less :).

That is slow; I can get about 70MB/s on the 4x SATA-II physical hard drives in my Linux box (and those are not the SSD drives).

> I just want to have a view of how other people are running in general,
> because I haven't really got anybody else to compare notes with.

Sure - this is the place for that. I think you need to find a way to do the same test on a local array (you probably have some local disks on your system - see if you can hijack enough space to run the same test locally). My guess is that, without doing anything else, you will find it is much faster, and that the SAN is just good at dd ;-) Get some results on the same machine without the SAN involved. I bet it will show you that the SAN needs configuring for jBASE; without fail, that has been the case at every [well, both - SANs were not around in MDIS days ;)] database company I have worked at. Also, remove the distributed file overhead and go for a straight SELECT on a single file. Start at, say, 100MB and go up in increments of 100MB. I also bet that at some point the SAN array drops off like a stone.
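The "start at 100MB and go up" test is easy to script; a rough sketch, assuming it is run once on the SAN-backed filesystem and once on a local disk so the two curves can be compared (the file name, sizes, and step are arbitrary choices):

```shell
# Write and read back files of increasing size, printing the read timings.
# On a healthy array the read time grows roughly linearly with size; a SAN
# that "drops off like a stone" shows read times growing much faster than that.
for mb in 100 200 300 400 500; do
    dd if=/dev/zero of=probe.tmp bs=1M count="$mb" 2>/dev/null
    echo "--- ${mb}MB read ---"
    time dd if=probe.tmp of=/dev/null bs=1M 2>/dev/null
done
rm -f probe.tmp
```

Running it in the directory being tested keeps the probe file on the array under test; cd into the SAN mount point for one run and a local filesystem for the other.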
Jim

--
Please read the posting guidelines at: http://groups.google.com/group/jBASE/web/Posting%20Guidelines
IMPORTANT: Type T24: at the start of the subject line for questions specific to Globus/T24
To post, send email to [email protected]
To unsubscribe, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/jBASE?hl=en
