Big thanks, Marc, for helping me here.

Question: how fast is "fast enough" here, and how big are your payloads?
Answer: < 50 ms for read plus deserialization of 20K keys. Payload size varies 
with the SubObj array length, but in the average case the SubObj array does not 
exceed 100 elements.

I am splitting the 20K keys into batches of 5000 and fetching the MyObject 
instances from redis using StackExchange.Redis MGET. I expected the redis fetch 
plus deserialization to take only a few ms, but what I am currently seeing is 
200-300 ms, with deserialization taking more than half of that, so I am trying 
to troubleshoot the deserialization and redis MGET timings separately.
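
To separate the serializer from the network, one thing I am planning to try is 
a standalone loop that deserializes a single cached payload repeatedly. This is 
only a rough sketch (DeserializeTimer and samplePayload are placeholder names of 
mine; samplePayload would be one serialized MyObject previously read from redis):

using System;
using System.Diagnostics;
using System.IO;
using ProtoBuf;

// Hypothetical helper: times protobuf-net deserialization in isolation,
// with no redis call involved.
public static class DeserializeTimer
{
    // samplePayload: one serialized MyObject previously read from redis.
    public static void Run(byte[] samplePayload, int iterations = 5000)
    {
        // Warm up once so first-call model compilation is not counted.
        using (var warm = new MemoryStream(samplePayload))
            Serializer.Deserialize<MyObject>(warm);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            using (var ms = new MemoryStream(samplePayload))
                Serializer.Deserialize<MyObject>(ms);
        }
        sw.Stop();

        Console.WriteLine("Deserialize {0} payloads: {1} ms", iterations, sw.ElapsedMilliseconds);
    }
}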

Below are the results of running this test on a lab VM vs. my desktop, followed 
by my code for the read and deserialization.

Results on 2 different machines:
VM with 16 cores:
Time to get 2165 keys from redis:109
Time to deserialize 2165 keys from redis:92
Time to get 5000 keys from redis:102
Time to deserialize 5000 keys from redis:191
Time to get 5000 keys from redis:119
Time to deserialize 5000 keys from redis:184
Time to get 5000 keys from redis:113
Time to deserialize 5000 keys from redis:190
Time to get 5000 keys from redis:95
Time to deserialize 5000 keys from redis:217
Redis get time:333


My dual-core desktop:
Time to get 5000 keys from redis:60
Time to deserialize 5000 keys from redis:57
Time to get 5000 keys from redis:116
Time to deserialize 5000 keys from redis:99
Time to get 5000 keys from redis:167
Time to deserialize 5000 keys from redis:99
Time to get 2165 keys from redis:129
Time to deserialize 2165 keys from redis:28
Time to get 5000 keys from redis:215
Time to deserialize 5000 keys from redis:70
Redis get time:294

Below is my code for this test:

var objsToGetBatched = objIds.Split(5000);
var outputs = new Dictionary<string, MyObject>();
Parallel.ForEach(objsToGetBatched,
    // new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
    batch =>
    {
        var resultForBatch = _redisClient.GetAll(batch.Select(id => id.ToString()));

        lock (Lock)
        {
            foreach (var kv in resultForBatch)
            {
                outputs.Add(kv.Key, kv.Value);
            }
        }
    });
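
As a variant I have been considering (just a sketch, not what the numbers above 
were measured with), the batches could all be issued asynchronously and awaited 
together, so that no thread blocks in Wait and no lock is needed when merging. 
The deserialization then happens on one thread, which may or may not end up 
faster; I have not measured this yet. The GetAllBatchedAsync name is only for 
illustration, and it assumes the same _redis multiplexer and _serializer used in 
GetAll below:

// Sketch: fire StringGetAsync for every batch up front, then await them all.
// Assumes usings for System.Collections.Generic, System.Linq,
// System.Threading.Tasks and StackExchange.Redis, plus the existing
// _redis and _serializer fields of this class.
public async Task<IDictionary<string, T>> GetAllBatchedAsync(IEnumerable<IEnumerable<int>> batches)
{
    var db = _redis.GetDatabase();

    // Issue all MGETs immediately; ToList() forces the calls to start now.
    var pending = batches
        .Select(batch => batch.Select(id => (RedisKey)id.ToString()).ToArray())
        .Select(rKeys => new { rKeys, task = db.StringGetAsync(rKeys) })
        .ToList();

    await Task.WhenAll(pending.Select(p => p.task));

    // Merge on a single thread afterwards, so no lock is required.
    var outputs = new Dictionary<string, T>();
    foreach (var p in pending)
    {
        var values = p.task.Result;
        for (int i = 0; i < values.Length; i++)
        {
            if (!values[i].IsNull)
                outputs.Add((string)p.rKeys[i], _serializer.Deserialize(values[i]));
        }
    }
    return outputs;
}

Here is the GetAll implementation that the batched loop above calls: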
public IDictionary<string, T> GetAll(IEnumerable<string> keys)
{
    IDictionary<string, T> results = new Dictionary<string, T>();
    try
    {
        var keysCnt = keys.Count();

        // Time the pipelined MGET (StringGetAsync over all keys) on its own.
        Stopwatch sw = new Stopwatch();
        sw.Start();
        IDatabaseAsync db = _redis.GetDatabase();
        RedisKey[] rKeys = new RedisKey[keysCnt];
        int idx = 0;
        foreach (var key in keys)
        {
            rKeys[idx++] = key;
        }

        var task = db.StringGetAsync(rKeys);
        db.Wait(task);
        sw.Stop();

        // Time protobuf-net deserialization of the returned values separately.
        Stopwatch sw1 = new Stopwatch();
        sw1.Start();
        for (idx = 0; idx < task.Result.Length; idx++)
        {
            if (task.Result[idx] != RedisValue.Null)
            {
                results.Add(rKeys[idx], _serializer.Deserialize(task.Result[idx]));
            }
        }
        sw1.Stop();

        Console.WriteLine("Time to get {0} keys from redis:{1}", keysCnt, sw.ElapsedMilliseconds);
        Console.WriteLine("Time to deserialize {0} keys from redis:{1}", keysCnt, sw1.ElapsedMilliseconds);
    }
    catch (Exception ex)
    {
        EventLogWriter.Error(ex, "Error Redis::StringGetAsync.");
    }
    return results;
}
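
On the deserialization side, one thing I will also try is warming protobuf-net 
up once at startup, so that model/serializer compilation never lands inside a 
timed batch (Marc's numbers below also show the first call being much more 
expensive). A minimal sketch; the ProtoWarmup class name is just for illustration:

using ProtoBuf;
using ProtoBuf.Meta;

// Hypothetical startup hook: build the protobuf-net serializers up front so
// the first timed MGET batch does not pay the model compilation cost.
public static class ProtoWarmup
{
    public static void Run()
    {
        Serializer.PrepareSerializer<MyObject>();
        Serializer.PrepareSerializer<SubObj>();

        // Optionally compile the whole default model in place as well.
        RuntimeTypeModel.Default.CompileInPlace();
    }
}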

On Wednesday, March 27, 2019 at 4:52:40 AM UTC-7, Marc Gravell wrote:
>
> Hi; I'd love to help you on this - I'm the protobuf-net author, and I also 
> know more than a little about redis; it *might* be a little off-piste for 
> this group though. Running some tests locally with 5000 instances, I get 
> times like 1ms, but it might be that I'm misunderstanding your object 
> model. It would be great to perhaps see a realistic payload that I can play 
> with to see what's going on here.
>
> Question: how fast is "fast enough" here, and how big are your payloads?
>
> I have some incomplete changes in the pipe which at a stroke double the 
> throughput, but without knowing the answers to these questions, it is hard 
> to know whether that is sufficient
>
> Depending on your target scenario, if the problem here turns out to be 
> allocations (of either the SubObj or the array), I have recently been doing 
> a lot of work in "arena allocators" which might be directly relevant - I 
> have a future plan to allow protobuf-net to consume / use the arena 
> allocator I've been working on, which would eliminate both of those 
> completely.
>
> Example output from my test (the first serialize/deserialize is always 
> more expensive) - which is around 68k in payload size:
>
> Serialize: 47ms
> Serialize: 1ms
> Serialize: 1ms
> Serialize: 1ms
> Serialize: 1ms
> Deserialize: 6ms
> Deserialize: 1ms
> Deserialize: 1ms
> Deserialize: 1ms
> Deserialize: 1ms
>
> My test code: 
> https://gist.github.com/mgravell/7afc08f432661f60138f6798efe2b15b
>
> If you want to email me directly (especially if your data contains "real" 
> things that you don't want to send to the entire group), please feel free 
> to do so.
>
> On Tue, 26 Mar 2019 at 18:32, Shweta Sharma <[email protected]> wrote:
>
>> I am prototyping an application to store data in redis using the 
>> StackExchange.Redis client. I am using Protobuf for serializing my objects 
>> and storing them as key/value pairs in redis. When reading 20K keys using 
>> pipelining (splitting the keys into batches of 5000), I observe deserialization 
>> times of 60-300 ms for a batch of, say, 5000 objects. The serialization code is 
>> using the protobuf-net library; the deserialization code is using 
>> protobuf-net 2.4.0.0.
>>
>> Is there any way I can speed up deserialization time, since this will impact 
>> my API performance?
>>
>> The object I am trying to deserialize is not too complex.
>>
>> [ProtoContract]
>>     public struct MyObject
>>     {
>>         [ProtoMember(1)]
>>         public int Key { get; set; }
>>         [ProtoMember(2)]
>>         public byte Attr1 { get; set; }
>>         [ProtoMember(3)]
>>         public bool IsUsed { get; set; }
>>         [ProtoMember(4)]
>>         public byte Attr2 { get; set; }
>>         [ProtoMember(5)]
>>         public SubObj[] SubObjects { get; set; }
>>
>>         public static MyObject Null
>>         {
>>             get
>>             {
>>                 return default(MyObject);
>>             }
>>         }
>>
>>         public bool IsNull
>>         {
>>             get { return Attr1 == default(byte) && SubObjects == null; }
>>         }
>>     }
>> [ProtoContract]
>>     public class SubObj
>>     {
>>         [ProtoMember(1)]
>>         public int StartOffset { get; set; }
>>         [ProtoMember(2)]
>>         public int EndOffset { get; set; }
>>         [ProtoMember(3)]
>>         public byte Attr1 { get; set; }
>>         [ProtoMember(4)]
>>         public byte Attr2 { get; set; }
>>         [ProtoMember(5)]
>>         public bool IsUsed { get; set; }
>>
>>         public static SubObj Null
>>         {
>>             get
>>             {
>>                 return default(SubObj);
>>             }
>>         }
>>
>>         public bool IsNull
>>         {
>>             get { return Attr1 == default(byte); }
>>         } 
>>     }
>>
>> Here are some sample timings using Stopwatch:
>>
>> Time to deserialize 5000 keys from redis:134
>> Time to deserialize 5000 keys from redis:147
>> Time to deserialize 5000 keys from redis:160
>> Time to deserialize 5000 keys from redis:65
>> Time to deserialize 5000 keys from redis:67
>> Time to deserialize 5000 keys from redis:242
>> Time to deserialize 5000 keys from redis:371
>> Time to deserialize 5000 keys from redis:190
>>
>> Thanks,
>> Shweta
>>
>
>
> -- 
> Regards, 
>
> Marc
>

