--
Jacob Bennett
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
Class of 2014 | benne...@mit.edu
--
documentation.) There must be such an attribute; maybe it's table.name?
Thanks,
Jacob
tables.Node._v_name
On Fri, Jul 20, 2012 at 11:08 AM, Jacob Bennett jacob.bennet...@gmail.com
wrote:
Hello PyTables Gurus,
I am trying to look up the name of a particular table when I am iterating
through all of the tables in my file; however, there doesn't seem to be a
name attribute accessible.
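As the reply above notes, every node carries its name as tables.Node._v_name. A minimal sketch of iterating tables and reading their names (file and column names here are invented; note that in the 2012-era API the calls were spelled openFile/walkNodes, while modern PyTables uses open_file/walk_nodes):

```python
# Hedged sketch: listing table names while iterating a PyTables file.
# File, table, and column names are made up; the in-memory CORE driver
# keeps the demo from touching disk.
import tables

with tables.open_file("demo_names.h5", "w",
                      driver="H5FD_CORE",
                      driver_core_backing_store=0) as h5:
    desc = {"x": tables.Int32Col()}
    h5.create_table("/", "trades", desc)
    h5.create_table("/", "books", desc)

    # Every Node exposes its name as _v_name; Table (a Leaf subclass)
    # also aliases it as the plain .name attribute.
    names = sorted(t._v_name for t in h5.walk_nodes("/", classname="Table"))

print(names)  # ['books', 'trades']
```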
This is great! Much gratitude must go to you all for your hard work.
Thanks,
Jacob Bennett
On Fri, Jul 20, 2012 at 3:38 PM, Anthony Scopatz scop...@gmail.com wrote:
Excellent! Congrats Everybody! Thanks a ton for all of your hard work.
I am going to send this out to numpy and scipy
to access the data across the datasets (for
aggregating or calculating averages) you can take a MapReduce approach
which should work very well with this approach.
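The map-reduce approach Anthony describes boils down to computing a small partial result per table, then merging the partials. A self-contained sketch with plain lists standing in for the PyTables tables (the ids and numbers are invented):

```python
# Sketch of map-reduce style aggregation across many per-id datasets:
# map = one cheap pass per table producing a (sum, count) partial,
# reduce = merging partials.  Plain lists stand in for real tables.
from functools import reduce

tables_by_id = {                      # hypothetical per-id datasets
    "AAPL": [10.0, 11.0, 12.0],
    "MSFT": [20.0, 22.0],
}

def map_partial(rows):
    """Per-table pass: (sum, count) partials can be merged later."""
    return (sum(rows), len(rows))

def reduce_partials(a, b):
    """Merge two partials; sums add and counts add."""
    return (a[0] + b[0], a[1] + b[1])

partials = [map_partial(rows) for rows in tables_by_id.values()]
total, count = reduce(reduce_partials, partials)
print(total / count)  # overall average across every table: 15.0
```

The same shape works for any aggregate that merges associatively (sums, counts, minima, maxima).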
On Tue, Jul 17, 2012 at 11:55 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Thanks for the input Anthony!
-Jake
On Tue
everything in one HDF5 file.
On Wed, Jul 18, 2012 at 12:49 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
I really like this way of going about it; however, would it be better to
use the built-in hierarchy for separation of the tables or to write to
separate HDF5 files? When I am
:32 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Sounds awesome, thanks for the help. I also have two more concerns.
#1 - I will never write concurrently; I only have to worry about one writer
with many reads. Will the HDF5 metadata for a tree-like structure be able to
hold up
table of data and then let the in-kernel
search query data only with a specific id?
I hope you can understand my question: would 1,000 tables of 100,000 records
each be better for searching than 1 table with 100 million records and one
extra id column?
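One half of that comparison, the single big table with an extra id column searched via an in-kernel query, can be sketched like this (column and file names are invented):

```python
# Hedged sketch of the one-big-table option: a single table with an extra
# id column, searched with an in-kernel query.  Names are invented; the
# CORE driver keeps everything in memory.
import tables

class Tick(tables.IsDescription):
    sec_id = tables.Int32Col(pos=0)   # the "one extra id column"
    price = tables.Float64Col(pos=1)

with tables.open_file("demo_query.h5", "w",
                      driver="H5FD_CORE",
                      driver_core_backing_store=0) as h5:
    t = h5.create_table("/", "ticks", Tick)
    t.append([(1, 10.0), (2, 20.0), (1, 30.0)])
    t.flush()

    # In-kernel query: the condition is evaluated inside PyTables/numexpr,
    # so only matching rows are materialised in Python.
    prices = [row["price"] for row in t.where("sec_id == 1")]

print(prices)  # [10.0, 30.0]
```

With an index on the id column (t.cols.sec_id.create_csindex()) this kind of query can usually stay fast at large row counts, which tends to be simpler to manage than thousands of small tables.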
Thanks,
Jacob Bennett
Thanks for the input Anthony!
-Jake
On Tue, Jul 17, 2012 at 4:20 PM, Anthony Scopatz scop...@gmail.com wrote:
On Tue, Jul 17, 2012 at 3:30 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Hello PyTables Users and Contributors,
Just a quick question, let's say that I have certain
on my writing queue since the data comes in
so fast. ;)
Btw, I will submit the example soon.
-Jacob
On Sat, Jul 14, 2012 at 1:39 PM, Anthony Scopatz scop...@gmail.com wrote:
+1 to example of this!
On Sat, Jul 14, 2012 at 1:36 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Awesome
On Fri, Jul 13, 2012 at 2:09 PM, Jacob Bennett
jacob.bennet...@gmail.com
wrote:
[snip]
My first implementation was to have a set of current files stay in write
mode and have an overall lock over these files for the current day, but
(stupidly) I forgot that lock instances don't work across separate
processes, only threads.
So could you give me any advice in this situation? I'm sure it has come up
before. ;)
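The distinction Jacob ran into can be shown directly: a threading.Lock lives in one process's memory, so it only coordinates threads inside that process; separate writer processes would need multiprocessing.Lock (or a single dedicated writer process). A minimal sketch of the within-process case:

```python
# Sketch of the pitfall: threading.Lock serialises threads inside ONE
# process only.  For separate writer processes you would reach for
# multiprocessing.Lock (or funnel all writes through one process) instead.
import threading

lock = threading.Lock()
counter = {"n": 0}

def bump():
    for _ in range(10_000):
        with lock:                 # serialises the threads in this process
            counter["n"] += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["n"])  # 40000: the lock worked, because these are threads
```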
Thanks,
Jacob Bennett
--
*that* expensive.
So use openFile() for opening and don't close until the end, and this
should be thread-safe for reading. Obviously, writing is more difficult.
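A sketch of that advice: pay the open() cost once, reuse the handle for every read, and close once at the end of the session. (openFile was the 2012-era spelling; modern PyTables calls it open_file. Names below are invented; the CORE driver keeps the demo in memory.)

```python
# Hedged sketch: open once, reuse the handle for all reads, close once.
import tables

h5 = tables.open_file("demo_shared.h5", "w",
                      driver="H5FD_CORE",
                      driver_core_backing_store=0)
try:
    t = h5.create_table("/", "ticks", {"x": tables.Int32Col()})
    t.append([(i,) for i in range(5)])
    t.flush()

    # Every reader reuses the same open handle instead of reopening
    # the file per query.
    total = sum(int(row["x"]) for row in t.iterrows())
finally:
    h5.close()        # the single close, at the very end

print(total)  # 0 + 1 + 2 + 3 + 4 = 10
```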
Be Well
Anthony
On Mon, Jul 2, 2012 at 5:38 AM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Hello PyTables Users,
I am
, what exactly is the number that
is failing?
Be Well
Anthony
On Thu, Jun 28, 2012 at 4:18 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Hello PyTables Users,
I have a concern about a very strange error saying that my Python ints
cannot be converted to C longs when trying to run
tableD.flush()
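The thread doesn't show the failing code, so this is only a guess at one common cause of "Python int too large to convert to C long": a value past 2**31 - 1 fed into a 32-bit column. Declaring the column as Int64Col avoids it (tableD's real schema is unknown; the column name below is invented):

```python
# Hedged sketch: a 64-bit column accepts values that would overflow an
# Int32Col and trigger the int-to-C-long conversion error.
import tables

with tables.open_file("demo_longs.h5", "w",
                      driver="H5FD_CORE",
                      driver_core_backing_store=0) as h5:
    tableD = h5.create_table("/", "d", {"ts": tables.Int64Col()})
    tableD.append([(2**40,)])   # too big for Int32Col, fine for Int64Col
    tableD.flush()
    stored = int(tableD[0]["ts"])

print(stored == 2**40)  # True
```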
On Wed, Jun 27, 2012 at 10:39 AM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Sorry about that, I uploaded the code, but since it requires
many dependencies, I was not expecting you to run it. That being said, I
would say the expected number of rows per
discussion on this would be great. The current hierarchy
consists of root leading to around 3000 nodes, each of which has around
10 rows.
Thanks,
Jacob
if you have further questions or really want to dive into this
issue.
Be Well
Anthony
On Mon, Jun 25, 2012 at 1:33 PM, Jacob Bennett
jacob.bennet...@gmail.com wrote:
Hello PyTables Users,
I am very new to PyTables and if you all could help me out, that would be
splendid.
I'm currently
unexpectedly.
Thanks,
Jacob
Attachment: BookDataWrapper.py (binary data)
Attachment: TradeDataWrapper.py (binary data)