Dates Schedule Problems

2005-08-14 Thread C.F. Scheidecker Antunes

Hello all,

I have a small table of TAs (teaching assistants) with the following MySQL 
schema:


tbl_NotAvailable {
   TAid - int(11)
   StartDate - DateTime
   EndDate - DateTime
}

This table logs the times when TAs are NOT available. So if a TA is not 
available from Monday 12:00 to Tuesday 13:30, there will be an entry in 
this table like (001, 2005-08-15 12:00, 2005-08-16 13:30), where 001 is 
the TA id.


Question one:
How can I return a calendar of the dates when the TA IS available, that 
is, the opposite of what is recorded?
I want a list of all the available days and times, obtained by 
subtracting the not-available times recorded in the table.
I guess I would need to produce a SELECT statement of all the days from 
Start to End and exclude the days that are recorded in the table.
In other words: given an interval, say 2005-08-01 00:00:00 to 
2005-08-31 23:59:00, how can I get a list of all days with no activity, 
based on the records of when the TA is not available?

Question two:
I want to make sure one can only book a time in the table that does not 
conflict with an existing one. How can I do it?
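The conflict test in question two usually reduces to the standard interval-overlap check: a new booking [start, end) collides with a recorded row when start < EndDate AND end > StartDate. A minimal sketch, using Python's sqlite3 as a stand-in for MySQL (the schema is the one above; the `conflicts` helper is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tbl_NotAvailable (
    TAid INTEGER, StartDate TEXT, EndDate TEXT)""")
conn.execute("INSERT INTO tbl_NotAvailable VALUES "
             "(1, '2005-08-15 12:00', '2005-08-16 13:30')")

def conflicts(conn, ta, start, end):
    # Overlap test: new.start < old.end AND new.end > old.start
    (n,) = conn.execute(
        """SELECT COUNT(*) FROM tbl_NotAvailable
           WHERE TAid = ? AND StartDate < ? AND EndDate > ?""",
        (ta, end, start)).fetchone()
    return n > 0

print(conflicts(conn, 1, '2005-08-16 12:00', '2005-08-16 14:00'))  # True
print(conflicts(conn, 1, '2005-08-16 13:30', '2005-08-16 15:00'))  # False
```

Note that a booking starting exactly when another ends does not count as a conflict with this test; zero-padded date strings compare correctly as plain strings.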


I've been browsing Joe Celko's book for ideas.

Any thoughts on how to accomplish this?

Thanks in advance,

C.F.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Dates Schedule Problems

2005-08-14 Thread Jasper Bryant-Greene

C.F. Scheidecker Antunes wrote:

Question one:
How can I return a calendar of the dates when the TA IS available, that 
is, the opposite of what is recorded?
I want a list of all the available days and times, obtained by 
subtracting the not-available times recorded in the table.
I guess I would need to produce a SELECT statement of all the days from 
Start to End and exclude the days that are recorded in the table.
In other words: given an interval, say 2005-08-01 00:00:00 to 
2005-08-31 23:59:00, how can I get a list of all days with no activity, 
based on the records of when the TA is not available?


I would use whatever server language you're using (e.g. PHP) to create a 
list of all dates between those dates, and then use something like WHERE 
date_field NOT IN (list,of,dates,between,those,dates).
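Jasper's idea can be sketched as follows, with Python standing in for PHP and sqlite3 for MySQL; the single-day `busy_day` column is a simplification of the StartDate/EndDate pair, and the set subtraction is done application-side rather than with a literal NOT IN list:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_NotAvailable (TAid INTEGER, busy_day TEXT)")
conn.executemany("INSERT INTO tbl_NotAvailable VALUES (1, ?)",
                 [("2005-08-15",), ("2005-08-16",)])

# Build the list of every day in the interval on the application side...
start, end = date(2005, 8, 14), date(2005, 8, 17)
all_days = [(start + timedelta(days=i)).isoformat()
            for i in range((end - start).days + 1)]

# ...then subtract the days recorded as busy.
busy = {r[0] for r in conn.execute(
    "SELECT busy_day FROM tbl_NotAvailable WHERE TAid = 1")}
free = [d for d in all_days if d not in busy]
print(free)  # ['2005-08-14', '2005-08-17']
```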


Jasper




Design problem About application related with cached rows

2005-08-14 Thread Kostas Karadamoglou

Hello,

I am trying to create an application for my dissertation that caches 
rows from a remote database. The application maintains a cache database 
which is identical to the original; the only difference is that it has 
no autogenerated fields, and referential integrity is omitted.


I have designed a caching algorithm specific to the context of my 
application. However, there is a problem that I cannot solve:


I want to send a query to the remote database and then store the result 
in the cache instance. The cache database may already hold rows that 
duplicate some rows of the result set retrieved by the query.


The easy solution is to insert the rows of the result set one by one, 
after checking for their existence in the cache table. However, this 
imposes extra load on the network, because useless data is moved 
across it.


Do you know an efficient way to fetch, with SQL queries, only the rows 
from the remote database that don't exist in the cache instance?


I tried the following kind of query, but the database returns an 
overflow message if the query string is too long.


SELECT * FROM Customers WHERE CustomerID NOT IN (01,02, 03, ...);
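One common way around the overlong IN-list is to load the known IDs into a temporary table in small batches and let the server compute the set difference. A sketch with Python and sqlite3 (one connection standing in for the remote server; in the real setup the temp table would be created over the same JDBC connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?)",
                 [(i, "c%d" % i) for i in range(1, 8)])

# Load the cached IDs into a temporary table in small batches, then let
# the server compute the set difference -- the query string stays short
# no matter how many IDs there are.
conn.execute("CREATE TEMP TABLE cached_ids (id INTEGER PRIMARY KEY)")
cached = [2, 3, 5]
conn.executemany("INSERT INTO cached_ids VALUES (?)", [(i,) for i in cached])

rows = conn.execute("""
    SELECT c.CustomerID FROM Customers c
    LEFT JOIN cached_ids k ON k.id = c.CustomerID
    WHERE k.id IS NULL ORDER BY c.CustomerID""").fetchall()
print([r[0] for r in rows])  # [1, 4, 6, 7]
```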





Re: Design problem About application related with cached rows

2005-08-14 Thread Pooly
Hi,

Try it in three steps:
1. Get all the IDs you have in your remote DB.
2. Check those locally against the ones you have in the cache.
3. Fetch all the info you need, after removing the IDs you already have.
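The steps above can be sketched like this (Python with two sqlite3 in-memory databases standing in for the remote and cache instances; table and column names are invented):

```python
import sqlite3

remote = sqlite3.connect(":memory:")
remote.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, name TEXT)")
remote.executemany("INSERT INTO Customers VALUES (?, ?)",
                   [(i, "c%d" % i) for i in range(1, 6)])

cache = sqlite3.connect(":memory:")
cache.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, name TEXT)")
cache.executemany("INSERT INTO Customers VALUES (?, ?)", [(2, "c2"), (4, "c4")])

# 1. Get all the IDs from the remote DB (IDs only -- cheap to transfer).
remote_ids = {r[0] for r in remote.execute("SELECT CustomerID FROM Customers")}
# 2. Check them locally against the cache.
cached_ids = {r[0] for r in cache.execute("SELECT CustomerID FROM Customers")}
missing = sorted(remote_ids - cached_ids)
# 3. Fetch only the rows the cache does not have yet.
ph = ",".join("?" * len(missing))
rows = remote.execute(
    "SELECT CustomerID, name FROM Customers WHERE CustomerID IN (%s)" % ph,
    missing).fetchall()
cache.executemany("INSERT INTO Customers VALUES (?, ?)", rows)
print(missing)  # [1, 3, 5]
```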

2005/8/14, Kostas Karadamoglou [EMAIL PROTECTED]:
 snip


-- 
Webzine Rock : http://www.w-fenec.org/




Re: Design problem About application related with cached rows

2005-08-14 Thread Kostas Karadamoglou
I had thought of this solution, but it needs extra software at the 
remote database. I would prefer to talk to the remote database directly 
from the cache database, using SQL.


But even with those 3 steps there is a problem. The only interface I 
have is JDBC/SQL. That means the only option would be to fetch all the 
rows into a result set locally (from the remote database) and then, in 
a loop, check each ID I don't need against the ID of the current row.


I think this is a bit too time consuming!!

Do you know a more effective way to do this?

Thank you in advance, Kostas

Pooly wrote:

Hi,

Try it in three steps:
1. Get all the IDs you have in your remote DB.
2. Check those locally against the ones you have in the cache.
3. Fetch all the info you need, after removing the IDs you already have.

2005/8/14, Kostas Karadamoglou [EMAIL PROTECTED]:


 snip




Re: Dates Schedule Problems

2005-08-14 Thread Peter Brawley

CF,

tbl_NotAvailable {
   TAid - int(11)
   StartDate  - DateTime
   EndDate - DataTime
}
snip
Question one:
Now, how can I return a calendar of the dates where the TA is AVAILABLE,
that is the oposite of what is recorded?

You may be better off with an explicit calendar table containing all 
possible timeslots. Absent that, you may be able to get your list from 
a query like the following, though you will probably have to adjust it 
for your context...


SELECT
 TAid,
 startdate AS beginavailable,
 enddate AS endavailable
FROM tbl_notavailable
WHERE TAid = 1 AND NOT EXISTS (
 SELECT enddate
 FROM tbl_notavailable
 WHERE startdate < beginavailable
   AND enddate >= endavailable
);
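An alternative that is often simpler than a pure-SQL solution is to fetch the busy rows for one TA and compute the complement of the intervals in the application. A sketch in Python (the interval values are the zero-padded strings from the post, so plain string comparison orders them correctly; the `free_intervals` helper is invented for illustration):

```python
# Busy intervals for one TA, as recorded in tbl_NotAvailable.
busy = [("2005-08-15 12:00", "2005-08-16 13:30"),
        ("2005-08-20 09:00", "2005-08-20 17:00")]

def free_intervals(busy, lo, hi):
    """Complement of the busy intervals inside [lo, hi]."""
    free, cursor = [], lo
    for s, e in sorted(busy):
        if s > cursor:                 # gap before this busy block
            free.append((cursor, s))
        cursor = max(cursor, e)        # skip past the busy block
    if cursor < hi:
        free.append((cursor, hi))
    return free

gaps = free_intervals(busy, "2005-08-01 00:00", "2005-08-31 23:59")
print(gaps[0])  # ('2005-08-01 00:00', '2005-08-15 12:00')
```

This also handles overlapping busy rows correctly, since the cursor only ever moves forward.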

PB






Re: Design problem About application related with cached rows

2005-08-14 Thread Pooly
Always reply to the list; others may have better ideas, other insights...
I may not have understood everything in your previous description. Why
would you need extra software on the remote DB??
All the steps I've described should work from the client.

All you want is a kind of cache for the client, which you can
invalidate whenever you judge it necessary?

If you need to detect UPDATEs on your remote DB, you're stuck, except
with 5.0.10 (at least) and triggers. If it's only INSERTs, then you
just have to query the last rows you don't have (by a timestamp or an
ID).
If you don't have any total-order relationship in your data, but do
have a unique ID, retrieve all the IDs from your remote DB, remove the
ones you have locally, and then query the full rows. No extra software
needed on the server!
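The INSERT-only case described above can be sketched like this (Python with sqlite3; the `log` table and its columns are invented for illustration):

```python
import sqlite3

remote = sqlite3.connect(":memory:")
remote.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
remote.executemany("INSERT INTO log VALUES (?, ?)",
                   [(i, "m%d" % i) for i in range(1, 7)])

cache = sqlite3.connect(":memory:")
cache.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
cache.executemany("INSERT INTO log VALUES (?, ?)",
                  [(1, "m1"), (2, "m2"), (3, "m3")])

# INSERT-only table: just ask for rows newer than the newest one we hold.
(last,) = cache.execute("SELECT COALESCE(MAX(id), 0) FROM log").fetchone()
new_rows = remote.execute("SELECT id, msg FROM log WHERE id > ?",
                          (last,)).fetchall()
cache.executemany("INSERT INTO log VALUES (?, ?)", new_rows)

total = cache.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(total)  # 6
```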

2005/8/14, Kostas Karadamoglou [EMAIL PROTECTED]:
 snip


-- 
Webzine Rock : http://www.w-fenec.org/




Re: Linux vs. Windows?

2005-08-14 Thread Gleb Paharenko
Brian Dunning [EMAIL PROTECTED] wrote:

 Same machine, any performance difference?

Have a look here, but use the OS which you know better:

  http://dev.mysql.com/doc/mysql/en/windows-vs-unix.html


-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







best practices for finding duplicate chunks

2005-08-14 Thread Gerald Taylor
I just revived a database that was on a version 3.23 server and moved 
it to a 4.1. There are big fields of TEXT-based data. They have a way 
of compressing the amount of TEXT data by identifying common subchunks, 
putting them in a subchunk table, and replacing them with a marker 
inside the main text that pulls in the subchunk whenever the parent 
chunk is requested. This subchunking seems to have been done rather ad 
hoc, because I've noticed the database still has quite a bit of 
duplicated chunks from one record to another. The client does not want 
to buy another drive to store data (even though he really should, for 
other reasons, but who cares what I think), so he wants it compressed, 
and oh well, I look on it as an opportunity for some housecleaning. Now 
that we have 4.1, what is the best practice for automatically finding 
common subchunks, factoring them out, and then replacing the original 
parent text with itself with the chunk cut out and a marker inserted? 
The hard part is finding them, obviously. The rest is easy.






Re: Disasterous database corruption

2005-08-14 Thread Sergei Golubchik
Hi!

On Aug 09, Daniel Kasak wrote:
 Hi all.
 
 I've been testing out mysql-5.0.10 on my Powerbook ( Gentoo Linux PPC ), 
 and I've hit an incredibly unfortunate bug. It is demonstrated 
 adequately with the following:
...
 Is anyone interested in examining what went wrong? ie should I create a 
 bug report? I suppose I'd have to upload a zipped copy of my 
 /var/lib/mysql folder or something, since mysql isn't too keen on giving 
 up any data voluntarily.

Yes, please submit a bugreport.
And try to explain there how we could repeat this bug.

Regards,
Sergei

-- 
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com




Limit in subquery

2005-08-14 Thread Pooly
Hi,

I use version 4.1.11, and when I run this query:

SELECT c.id,c.subject,c.res_type,c.news_id,c.com_thread,c.timestamp +
0 as timestamp FROM comments c WHERE c.id IN (SELECT d.id FROM
comments d WHERE d.res_type=1 GROUP BY d.news_id ORDER BY d.id DESC
LIMIT 5 ) ORDER BY c.timestamp DESC;
ERROR 1235 (42000): This version of MySQL doesn't yet support 'LIMIT 
IN/ALL/ANY/SOME subquery'

Does anyone know if this limitation will be lifted, and when?

Or does anyone know how to write this query another way?
I am trying to pull the last 5 comments that match res_type=1, but
with only one per news_id and only the last 5 comments.
(I guess I could use a temp table, fill it with the 5 matching ids,
then join it with the comments table.)
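The temp-table workaround mentioned above can be sketched with Python and sqlite3 (sqlite does not share MySQL 4.1's restriction, but the two-step shape is the same; the column set is reduced for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments "
             "(id INTEGER PRIMARY KEY, news_id INTEGER, res_type INTEGER)")
conn.executemany("INSERT INTO comments VALUES (?, ?, ?)",
                 [(1, 10, 1), (2, 10, 1), (3, 11, 1),
                  (4, 12, 0), (5, 12, 1), (6, 13, 1)])

# Step 1: latest matching comment id per news_id, newest five only.
conn.execute("""CREATE TEMP TABLE last5 AS
    SELECT MAX(id) AS id FROM comments WHERE res_type = 1
    GROUP BY news_id ORDER BY id DESC LIMIT 5""")
# Step 2: join back to the comments table to get the full rows.
rows = conn.execute("""SELECT c.id, c.news_id FROM comments c
                       JOIN last5 t ON t.id = c.id
                       ORDER BY c.id DESC""").fetchall()
print(rows)  # [(6, 13), (5, 12), (3, 11), (2, 10)]
```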

-- 
Webzine Rock : http://www.w-fenec.org/




Re: best practices for finding duplicate chunks

2005-08-14 Thread Gerald Taylor

Thanks for your answer. It would certainly work, provided I have
enough disk space. I had thought of something like that, but was
hoping I could leverage fulltext and just record the fulltext result
between each record and each other record. Then I could group all
records that correlate highly, and do a much smaller-scale version of
the brute-force indexing you are proposing, i.e. only do it on a group
of records that we already know have a high correlation, i.e. a high
probability of sharing a chunk in common. Then, when done, I can throw
away that data and do another group. What do you think? Processing
cycles I have, but easy disk space I don't.

Alexey Polyakov wrote:

There's no easy way to do it I think. But if spending a few hours
(days?) programming is ok with you I'd suggest something like this:
1) create a table (let's call it hashes) with three columns: hash,
doc_id, pos_id (doc_id is an identifier for records from table with
big text chunks)
2) retrieve a record from big table. Calculate hash value for
concatenated first 20 words from text. Insert this value into
hash/doc_id table, and 1 as value of pos_id. Calculate hash for
concatenated 20 words starting from 2-nd word of this text, and also
insert it into hash/doc_id table (2 as value of pos_id). Repeat until
you reach the end of this text.
3) Repeat 2) for all records of big table
4) Now you have all data needed for identifying those duplicate chunks.
select hash, count(doc_id) as c from hashes group by hash having c > 1;
will return all hashes for 20-word chunks that are found in 2 or more documents.
select doc_id from hashes where hash=some_value;
will return the documents that contain that chunk.

select h1.pos_id, h2.pos_id from hashes h1, hashes h2 where
h1.doc_id=doc1 and h2.doc_id=doc2 and h1.hash=h2.hash order by
h1.pos_id;
will return word positions for duplicate text in two documents.
For example last query returns:
156 587
157 588
...
193 624
It means that you can take words 156-212 from doc1, insert them into
the subchunks table, and replace words 156-212 in doc1 and words 587-643
in doc2 with a marker.


Yeah it looks ugly, and will take a lot of space for temporary data.
But in the end you'll have all 20+ words duplicate chunks properly
identified.
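Step 2 of the algorithm, hashing every 20-word window, can be sketched in Python (hashlib.md5 is an assumption; any stable hash works):

```python
import hashlib

def window_hashes(text, doc_id, n=20):
    """Hash every n-word window of a document: rows (hash, doc_id, pos_id)."""
    words = text.split()
    rows = []
    for pos in range(len(words) - n + 1):
        chunk = " ".join(words[pos:pos + n])
        h = hashlib.md5(chunk.encode()).hexdigest()
        rows.append((h, doc_id, pos + 1))   # pos_id is 1-based, as in the post
    return rows

doc = " ".join("w%d" % i for i in range(30))
rows = window_hashes(doc, doc_id=1)
print(len(rows))  # 30 - 20 + 1 = 11 windows
```

Each returned tuple maps directly onto one row of the proposed `hashes` table.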

On 8/14/05, Gerald Taylor [EMAIL PROTECTED] wrote:


I just revived a database that was in a version 3.23 server and moved it
to a 4.1   There are big fields of TEXT based data.  They have a way of
compressing the amount of TEXT data by identifying common subchunks and
putting them in a subchunk table and replacing them with a marker
inside the main text that will pull in that subchunk whenever the parent
chunk is requested.  This subchunking seems to have been done kind of
ad hoc, because I've noticed the database still has quite a bit of
duplicated chunks from one record to another.  The client does not want
to buy another drive to store data (even tho he really should for
other reasons anyway but who cares what I think) , so he wants it
compressed, and oh well I look on it as an opportunity for some
housecleaning.  Now that we have 4.1 what is the best practice for
automated looking for common subchunks, factoring them out, and then
replacing the original parent text with itself with the chunk cut out
and a marker inserted.  The hard part is finding them, ovbiously.  The
rest is easy.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]













cannot drop database

2005-08-14 Thread Gary Huntress
I need to drop a database named ÃáãÃáà using the mysql client. I'm 
getting "you have an error in your SQL syntax" for the command


DROP DATABASE ÃáãÃáÃ;

I'm sure this is a character set issue. How can I drop this database?


Regards,

Gary H.







Re: best practices for finding duplicate chunks

2005-08-14 Thread Alexey Polyakov
You can modify the algorithm I proposed to find groups of records that
are likely to have duplicate chunks. Simply record only a fraction of
the hashes, e.g. only those where md5(concat(word1,word2,...,word20))%32=0.
Disk usage for this table will be maybe 60 bytes per record; if your
average word is 8 bytes (counting whitespace), then the disk space
you'll need is about 25% of the data size.
After the groups of records are found, you can do brute-force indexing
to find the duplicate chunks.
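The %32 sampling filter can be sketched like this (Python; again, md5 is just one possible stable hash):

```python
import hashlib

def sampled_hashes(words, doc_id, n=20, modulus=32):
    """Record only ~1/modulus of the n-word window hashes to save disk."""
    out = []
    for pos in range(len(words) - n + 1):
        h = hashlib.md5(" ".join(words[pos:pos + n]).encode()).hexdigest()
        if int(h, 16) % modulus == 0:   # the %32 filter from the post
            out.append((h, doc_id, pos + 1))
    return out

words = ["w%d" % i for i in range(200)]
kept = sampled_hashes(words, doc_id=1)
print(len(kept))  # roughly 1/32 of the 181 windows
```

Because the filter depends only on the hash value, the same chunk passes (or fails) the filter in every document, which is what makes the sampled table still usable for finding shared chunks.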

On 8/15/05, Gerald Taylor [EMAIL PROTECTED] wrote:
 snip

-- 
Alexey Polyakov




A little help with mysql's Query Cache configuration

2005-08-14 Thread Maxim Vexler
MySQL's query cache has two options, [query_cache_limit] and [query_cache_size].
The documentation is not clear (to me) on their purpose and correct
usage.

What does [query_cache_size] set: the maximum memory space a single
query can utilize? Is it wise to set it to the same size as
[query_cache_limit]?

How much memory would you recommend giving the cache if I have 1GB on
the system, and it runs httpd/php and mysql all together?


I've also included a dump from our live server, for which I have
configured the query cache, for reference (or criticism):

mysql> SHOW STATUS LIKE 'Qcache%';
+-+-+
| Variable_name   | Value   |
+-+-+
| Qcache_queries_in_cache | 1007|
| Qcache_inserts  | 98199   |
| Qcache_hits | 97128   |
| Qcache_lowmem_prunes| 49330   |
| Qcache_not_cached   | 671 |
| Qcache_free_memory  | 1063112 |
| Qcache_free_blocks  | 340 |
| Qcache_total_blocks | 2374|
+-+-+
8 rows in set (0.00 sec)

mysql> SHOW GLOBAL VARIABLES LIKE 'query_cache%';
+---+-+
| Variable_name | Value   |
+---+-+
| query_cache_limit | 5242880 |
| query_cache_size  | 3145728 |
| query_cache_type  | ON  |
+---+-+
3 rows in set (0.00 sec)


Thank you.

-- 
Cheers, 
Maxim Vexler (hq4ever).

Do u GNU ?




Re: A little help with mysql's Query Cache configuration

2005-08-14 Thread Alexey Polyakov
query_cache_size - the total amount of memory the server uses for
query caching.
query_cache_limit - the maximum size of a result set that can be cached.
I'd suggest leaving query_cache_limit at 1MB (the default value), and
tuning query_cache_size until you have a reasonably low
Qcache_lowmem_prunes value.
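A rough way to read the STATUS dump above (plain Python arithmetic on the posted counters; the thresholds are judgment calls, not MySQL rules):

```python
# Counters from the SHOW STATUS dump in the post.
hits, inserts, not_cached = 97128, 98199, 671
lowmem_prunes = 49330

selects = hits + inserts + not_cached      # cacheable SELECT traffic seen
hit_rate = hits / selects
prune_ratio = lowmem_prunes / inserts      # how often entries were evicted

print("hit rate: %.1f%%" % (100 * hit_rate))   # about 49.6%
print("prune ratio: %.2f" % prune_ratio)       # about 0.50
```

Roughly half of the cached entries being pruned for lack of memory is what suggests growing query_cache_size here.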



On 8/15/05, Maxim Vexler [EMAIL PROTECTED] wrote:
 snip


-- 
Alexey Polyakov




Question on Join

2005-08-14 Thread Manoj
Dear All,
I am trying to join two tables, say a and b. Both tables have a set
of dates. I want to join them in such a fashion that I retrieve all
dates from table a; if table b has corresponding data for that date,
it is displayed, otherwise NULL is displayed. I am pretty sure I can
solve this with a LEFT OUTER JOIN, but am not able to get it working,
hence any help would be appreciated!
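The behaviour described can be sketched like this (Python with sqlite3; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (d TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE b (d TEXT, val INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)",
                 [("2005-08-13",), ("2005-08-14",), ("2005-08-15",)])
conn.executemany("INSERT INTO b VALUES (?, ?)", [("2005-08-14", 42)])

# All dates from a; b's value where it exists, NULL (None) otherwise.
rows = conn.execute("""SELECT a.d, b.val FROM a
                       LEFT JOIN b ON b.d = a.d ORDER BY a.d""").fetchall()
print(rows)  # [('2005-08-13', None), ('2005-08-14', 42), ('2005-08-15', None)]
```

A plain (inner) JOIN would drop the two dates with no match in b, which is the usual reason this "doesn't work" on a first attempt.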

Cheers

Manoj




Post-Installation Setup Problems: error[2002 1604]

2005-08-14 Thread Tim Johnson
Hello:
OS: Linux Slackware 10.0
Ver: 4.0.20
Network Status: Closed to outside world

After installation:
I'm getting the following problems:

Login Problem:

mysql --host=localhost --user=root --password=**
ERROR 2002: Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (13)

What works:
mysql -u tim -p
Enter password: *   # works
---
Problems unencrypting password:
SET PASSWORD FOR 'tim'@'localhost' = OLD_PASSWORD('**')
ERROR 1064: You have an error in your SQL syntax.  Check the manual that
corresponds to your MySQL server version for the right syntax to use
near
'OLD_PASSWORD('marion')' at line 1

## Note: am unclear what is wrong with syntax
## Host table dump below:
SELECT Host, User FROM mysql.user;
 +---+--+
 | Host  | User |
 +---+--+
 | linus |  |
 | linus | root |
 | linus | tim  |
 | linus.johnson.com | tim  |
 | localhost |  |
 | localhost | root |
 | localhost | tim  |
 +---+--+

Any advice is appreciated; I am working from the installed docs.

Thanks
tj

-- 
Tim Johnson [EMAIL PROTECTED]
  http://www.alaska-internet-solutions.com
