Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On Monday, 7 October 2019 18:08:27 PDT Roland Hughes wrote:
> There was a time when a Gig of storage would occupy multiple floors of
> the Sears Tower and the paper weight was unreal.

Have you ever heard of Claude Shannon?

Anyway, you can't get more data into storage than there are possible states of 
matter. As far as our *physics* knows, you could maybe store a byte per 
electron. That would weigh about 5 billion kilograms (roughly 5 million metric 
tons) to store 16 * 2^128 bytes.

We have absolutely no clue how to have that many electrons in one place 
without protons and without violating the Pauli exclusion principle.
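
If you want to sanity-check that figure, a back-of-the-envelope sketch (the only 
assumed input is the electron rest mass, about 9.1e-31 kg):

#include <cstdio>

int main()
{
    const double electronMassKg = 9.109e-31;        // assumed electron rest mass
    const double bytesStored    = 16.0 * 3.403e38;  // 16 * 2^128 bytes, one electron per byte
    const double massKg         = bytesStored * electronMassKg;
    // Prints roughly 5e9 kg, i.e. about 5 million metric tonnes.
    std::printf("%.3g kg (%.3g tonnes)\n", massKg, massKg / 1000.0);
    return 0;
}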

> According to this undated (I *hate* that!) BBC Science article at some
> point in time Google, Amazon, Microsoft and Facebook combined had 1.2
> million terabytes of storage. By your calculations, shouldn't putting
> that much storage on one coast have shifted the planet's orbit?

How about you do some math before spouting nonsense?

1.2 million terabytes is about 2^60 bytes, which is NOWHERE NEAR the mass I talked 
about for 2^132 bytes. At the estimate I used of 21 ng/byte, the total is only 
25,200 metric tonnes.

> As I said, the hackers don't need the entire thing. If they are sniffing
> a CC processor handling a million transactions per day (not unreasonable
> especially during back-to-school, on Saturday or during holiday shopping
> season)
> 
> https://www.statista.com/statistics/261327/number-of-per-card-credit-card-tr
> ansactions-worldwide-by-brand-as-of-2011/
> 
> At any rate, enough rows in the DB to achieve a 1% penetration rate
> gives them 10,000 compromised credit cards via an automated process. A
> tenth of a percent is 1,000. Not a bad haul.

Sure. How many entries in the DB do you need to generate a 0.1% hit rate?

I don't know how to calculate that, so I'm going to guess that you need one 
trillionth of the total space for that.

One trillionth of 2^128 possibilities is roughly 2^88. Times 16 bytes per 
entry, with no overhead, we have 2^92 bytes. Times 1 picogram per byte is 5 
billion tons. More importantly, 2^92 bytes is orders of magnitude more storage 
than exists today. The NSA Datacentre in Utah is estimated to handle 12 
exabytes, so let's estimate the total storage in existence today is 100 
exabytes. That's 50 million times too little to store one trillionth of the 
problem space.
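
The same estimate as a quick sketch, using the figures above plus the assumed 
100 exabytes of worldwide storage:

#include <cstdio>
#include <cmath>

int main()
{
    const double keyspace     = std::pow(2.0, 128);
    const double entries      = keyspace / 1e12;      // one trillionth, roughly 2^88
    const double bytes        = entries * 16.0;       // 16 bytes per entry, no overhead, ~2^92
    const double tonnes       = bytes * 1e-12 / 1e6;  // 1 picogram per byte -> ~5e9 tonnes
    const double totalStorage = 100e18;               // assumed worldwide storage, 100 EB
    std::printf("%.3g bytes, %.3g tonnes, %.0f million times today's storage\n",
                bytes, tonnes, bytes / totalStorage / 1e6);
    return 0;
}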

I don't doubt that there are hackers that have dedicated DCs to cracking 
credit card processor traffic they may have managed to intercept. But they are 
not doing that by attacking the encryption.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On Monday, 7 October 2019 15:43:21 PDT Roland Hughes wrote:
> No. This technique is cracking without cracking. You are looking for a
> fingerprint. That fingerprint is the opening string for an xml document
> which must be there per the standard. For JSON it is the quote and colon
> stuff mentioned earlier. You take however many bytes from the logged
> packet as the key size the current thread is processing and perform a
> keyed hit against the database. If found, great! If not, shuffle down
> one byte and try again. Repeat until you've exceeded the attempt count
> you are willing to make or found a match. When you find a match you try
> key or key+salt combination on the entire thing. Pass the output to
> something which checks for seemingly valid XML/JSON then either declare
> victory or defeat.

That DOES work with keys produced by OpenSSL that was affected by the Debian 
bug you described. That's because the bug caused the problem space to be 
extremely restricted. You said 32768 (2^15) possibilities.

A non-broken random generator will produce 2^128 possibilities in 128 bits. 
You CANNOT enumerate or compare against that many possibilities fast enough.
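
To make the difference concrete, a minimal sketch of why the Debian-bug keyspace 
was walkable; keyFromSeed() is a hypothetical stand-in for the weak derivation, 
not real OpenSSL code:

#include <cstdint>
#include <cstring>

struct Key { unsigned char bytes[16]; };

// Hypothetical stand-in for how the broken generator derived a key from its
// tiny seed; only here so the sketch builds.
static Key keyFromSeed(uint16_t seed)
{
    Key k = {};
    std::memcpy(k.bytes, &seed, sizeof seed);
    return k;
}

int main()
{
    // The Debian bug left only 2^15 distinct seeds, which is trivially enumerable.
    for (uint32_t seed = 0; seed < 32768; ++seed) {
        Key candidate = keyFromSeed(static_cast<uint16_t>(seed));
        (void)candidate;   // try the candidate against the captured traffic here
    }
    // A proper 128-bit key has 2^128 candidates; no such loop ever finishes.
    return 0;
}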

> These attacks aren't designed for 100% capture/penetration. The workers
> are continually adding new rows to the database table(s). The sniffed
> packets which were not successfully decrypted can basically be stored
> until you decrypt them or your drives fail or that you decide any
> packets more than N-weeks old will be purged.

You seem to be arguing for brute-force attacks until one gets lucky. That is 
possible. But the chances of being lucky in finding a key are probably worse 
than winning $1 billion in the lottery. Much worse.

So it can happen. But the chance that it does happen and that the captured 
packet contains critical information is infinitesimal.

> The success rate of such an attack improves over time because the
> database gets larger by the hour. Rate of growth depends on how many
> machines are feeding it. Really insidious outfits would sniff a little
> from a bunch of CC or mortgage or whatever processing services,
> spreading out the damage so standard track back techniques wouldn't
> work. The only thing the victims would have in common is that they used
> a credit card or applied for a mortgage but they aren't all from the
> same place.

Sure, it improves, but probably slowly. This procedure is limited by computing 
power, storage and the operating costs. Breaking encryption by brute-force 
like you're suggesting is unlikely to produce a profit: it'll cost more than 
the gain once cracked.

Crackers don't attack the strongest part of the TLS model, which is the 
encryption. They attack the people and the side-channels.

> If correct that means they (the nefarious people) could have started
> their botnets, or just local machines, building such a database by some
> time in 2011 if they were interested. That's 8+ years. They don't _need_
> 100% coverage.

No, they don't need 100% coverage. But they need coverage such that the 
probability of matching is sufficient that it'll pay the operating costs. In 8 
years, assuming 1 billion combinations generated every second, we're talking 
about 252 quadrillion combinations generated. Assuming 64 bits per entry and 
no overhead, that's 2 exabytes to store. The current cost of Amazon S3 Infrequent 
Access storage is about 1¢/GB per month, so it would cost $20M per month.

And that amounts to 1.3% of the problem space, which is why we don't use 
64-bit keys.

If we talk about 128-bit keys, it's $40M per month for a coverage rate of 7.5 
* 10^(-22). You'll reach 1 part per billion coverage in 10 trillion years, 
assuming constant processing power.
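
The arithmetic behind those figures, as a sketch; the one-billion-entries-per-second 
rate and the roughly 1¢/GB-month price are the assumptions stated above:

#include <cstdio>
#include <cmath>

int main()
{
    const double ratePerSecond = 1e9;                     // assumed generation rate
    const double seconds       = 8.0 * 365.25 * 86400.0;  // eight years
    const double entries       = ratePerSecond * seconds; // ~2.5e17 entries
    const double bytes         = entries * 16.0;          // 128-bit entries, no overhead
    const double dollarsMonth  = bytes / 1e9 * 0.01;      // ~1 cent per GB-month
    const double coverage      = entries / std::pow(2.0, 128);
    std::printf("%.3g entries, $%.0fM/month, coverage %.2g\n",
                entries, dollarsMonth / 1e6, coverage);
    return 0;
}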

Calculating assuming an infinite Moore's Law is left as an exercise for the 
reader.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 6:21 PM, Thiago Macieira wrote:

On segunda-feira, 7 de outubro de 2019 07:06:07 PDT Roland Hughes wrote:

We now
have the storage and computing power available to create the database or
2^128  database tables if needed.

Do you know how ludicrous this statement is?

Let's say you had 128 bits for each of the 2^128 entries, with no overhead,
and each bit weighed 1 picogram (an 8 GB RAM DIMM weighs 185 g, which is 21 ng/
byte). You'll need a storage of 4.3 * 10^25 kg, or about 7.3 times the mass of
the Earth.

Let's say that creating such a table takes an average of 1 attosecond per
entry, or one million entries per nanosecond. Note I'm saying your farm is
producing 10^18  entries per second, reaching at least 1 exaflops, producing
about 16 exabytes per second of data. You'll need 10 trillion years to
calculate.

The only way this is possible is if you significantly break the problem such
that you don't need 2^128  entries. For example, 2^80  entries would weigh
"only" 155 million tons and that's only 16 yottabytes of storage, taking only
14 days to run in that magic[*] farm, with magic connectivity and magic
storage.

[*] After applying Clarke's Third Law.


LOL,

Glad I could help you vent!

There was a time when a Gig of storage would occupy multiple floors of 
the Sears Tower and the paper weight was unreal.


This Gorilla 128GB USB 3 thumb drive weighs almost exactly the same as 
the Lexar 32GB USB 2.0 thumb drive (I didn't put them on the scale, just 
hand balanced), yet one holds 4 times as much as the other. They both appear to 
weigh less than this LS-120 Super Floppy, which only holds 120 MB.


The 6TB drive which just arrived I did put on the postage scale; it 
weighed 22 ounces. According to this link the 12TB weighs 1.46 lbs., or 
almost the same, just a skosh over 23 ounces. The new 15TB is 660 grams.


https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/data-center-drives/ultrastar-dc-hc600-series/data-sheet-ultrastar-dc-hc620.pdf

This 60TB RAID array does tip the scales at 22 lbs though.

https://www.polylinecorp.com/lacie-60tb-6big-6-bay-thunderbolt-3-raid-array.html

I cannot find a weight for the Nimbus 100TB.

https://nimbusdata.com/products/exadrive-platform/advantages/

According to this undated (I *hate* that!) BBC Science article at some 
point in time Google, Amazon, Microsoft and Facebook combined had 1.2 
million terabytes of storage. By your calculations, shouldn't putting 
that much storage on one coast have shifted the planet's orbit?


As I said, the hackers don't need the entire thing. If they are sniffing 
a CC processor handling a million transactions per day (not unreasonable 
especially during back-to-school, on Saturday or during holiday shopping 
season)


https://www.statista.com/statistics/261327/number-of-per-card-credit-card-transactions-worldwide-by-brand-as-of-2011/

At any rate, enough rows in the DB to achieve a 1% penetration rate 
gives them 10,000 compromised credit cards via an automated process. A 
tenth of a percent is 1,000. Not a bad haul.


Please keep in mind that what they need is the architecture and a 
functional sampling. They don't need everything to achieve that.



--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 6:21 PM, Thiago Macieira wrote:

On segunda-feira, 7 de outubro de 2019 05:31:17 PDT Roland Hughes wrote:

Let us not forget we are at the end of the x86 era when it comes to what
evil-doers will use to generate a fingerprint database, or brute force
crack.

https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break
-2048-bit-rsa-encryption-in-8-hours/

[Now Gidney and Ekerå have shown how a quantum computer could do the
calculation with just 20 million qubits. Indeed, they show that such a
device would take just eight hours to complete the calculation.  “[As a
result], the worst case estimate of how many qubits will be needed to
factor 2048 bit RSA integers has dropped nearly two orders of
magnitude,” they say.]

Oh, only 20 million qubits? That's good to know, because current quantum
computers have something like 100 or 200.

Not 100 million qubits, 100 qubits.


Kids these days!

When I started in IT, a gigabyte wasn't even conceivable. The term 
terabyte hadn't even been created, so it was beyond science fiction.




Yes, I know that Shor's algorithm could solve the prime factorisation
that is at the core of RSA and many other public key encryption mechanisms in
polynomial time. But no one has ever put it into practice at that scale,
yet.

And there are all the quantum-resistant algorithms, some of which are already
deployed (like AES), some of which are in development.

A bullet resistant vest is resistant until someone builds a better bullet.



While there are those here claiming 128-bit and 256-bit are
"uncrackable" people with money long since moved to 2048-bit because 128
and 256 are the new 64-bit encryption levels. They know that an entity
wanting to decrypt their sniffed packets doesn't need the complete
database, just a few fingerprints which work relatively reliably. They
won't get everything, but they might get the critical stuff.

You're confusing algorithms. RSA asymmetric encryption today requires more
than 1024 bits, 2048 recommended, 4096 even better. AES is symmetric
encryption and requires nowhere near that much, 128 is sufficient, 256 is very
good. Elliptic curves are also asymmetric and require much less than 1024
bits.
No, I wasn't, but sorry for causing confusion. I didn't mean OpenSource 
or published standard when I said "people with money." Just skip that.



Haven't you noticed a pattern over the decades?

X-bit encryption would take a "super computer" (never actually
identifying which one) N-years running flat out to crack.

A few years later

Y-bit encryption would take a "super computer" (never actually
identifying which one) N-years running flat out to crack (without any
mention of why they were/are wrong about X-bit).

Oh! You wanted "Why?" Sorry.
Again, you're deliberately misleading people here. The supercomputers *are* 
identified. And the fact that technology progresses is no surprise. It's 
*expected* and accounted for. That's why the number of bits in most ciphers is
increasing, that's why older ciphers are completely dropped, that's why we're
getting new ones and new versions of TLS.
You know, I have *never* heard them identified. The Y-bit encryption claim is 
what I hear each and every time someone spouts off about how secure 
something is. They never identify the machine, and they never, under any 
circumstances, admit that the very first combination tried at "random" 
just might succeed. The calculation/estimate *always* assumes it is the 
last possible entry which will decrypt the packet, and that such a feat 
will *always* be the case.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] Interest Digest, Vol 97, Issue 7

2019-10-07 Thread Roland Hughes
It would be a lot easier to do if people would quit badgering me, trying to tell 
me the Easter Bunny is real because Tinkerbell told them so.


The drive arrives tomorrow I think. If I'm not interrupted too often I 
can start writing code for the various pieces. When I get it all done 
"some" of it will be on the blog in a series of posts. I personally 
would never put all of it out there because of the potential legal 
ramifications.


On 10/7/19 6:21 PM, interest-requ...@qt-project.org wrote:

Yes, I also know about the lizardmen from Phobos who can crack SSL/TLS
keys instantly.
If you can show some code all this would be much more credible. After
all, this is a Qt mailing list, not a science fiction one.

Rgrds Henry


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] Licensing

2019-10-07 Thread Melinda Seifert
Nikos,
Actually, that is incorrect. You can use commercial Qt if you previously used open 
source, but it’s on a case-by-case basis and you need to get approval from The 
Qt Company.

Sent from my iPhone
Regards,
Melinda Seifert 
Director of the Americas
melinda.seif...@qt.io
(O) 617-377-7918
(C) 617-414-4479
www.qt.io


> On Oct 7, 2019, at 6:42 PM, Nikos Chantziaras  wrote:
> 
> Note that there is (or was?) a restriction in the commercial license. You 
> are not allowed to use commercial Qt if you previously used open source Qt in 
> the project. So you might not even be allowed to switch from open source to 
> commercial.
> 
> Not sure if that (very) weird term has been removed now or not, but it was 
> there a while ago.
> 
> 
>> On 07/10/2019 18:57, Colin Worth wrote:
>> Thanks Giuseppe, Jerome, and Uwe. All of this makes sense to me. I will have 
>> to talk to our software and management people and decide what our best route 
>> is. Incidentally, we will also need FDA certification for this product. This 
>> is all a bit preliminary. The product is still in development. I’m in touch 
>> with the Qt office in Boston as well.
>> Cheers,
>> Colin
 On Oct 7, 2019, at 1:55 AM, Uwe Rathmann  wrote:
>>> 
 On 10/6/19 12:03 PM, Giuseppe D'Angelo via Interest wrote:
 
 Hey, I linked it two emails ago :-)
>>> 
>>> Ah yes, sorry.
>>> 
>>> My response was initially more explicit about FUD, before I decided that 
>>> it was not worth the effort.
>>> 
>>> Uwe
> ___
> Interest mailing list
> Interest@qt-project.org
> https://lists.qt-project.org/listinfo/interest
___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes
From the haze of the smoke. And the mescaline. - The Airborne Toxic 
Event "Wishing Well"


On 10/7/19 3:46 PM, Matthew Woehlke wrote:

On 04/10/2019 20.17, Roland Hughes wrote:



Even if all of that stuff has been fixed, you have to be absolutely
certain the encryption method you choose doesn't leave its own tell-tale
fingerprint. Some used to have visible oddities in the output when they
encrypted groups of contiguous spaces, nulls, etc. Plus, there are quite
a few places like these showing up on-line.

Again, though, it seems like there ought to be ways to mitigate this. If
I can test for successful decryption without decrypting the*entire*
message, that is clear grounds for improvement.


Sorry for having to invert part of this but the answer to this part 
should make the rest clearer.


I've never once introduced the concept of partial decryption. Someone 
else tossed that red herring into the soup. It has no place in the 
conversation.


The concept here is encrypting, over and over again with different keys 
(and, for some methods, salts as well), a short string which is a 
"fingerprint" known to exist in the target data. These get recorded 
into a database. If the encrypted message is in a QByteArray you use a 
walking window down the first N bytes, performing keyed hits to find a 
matching sequence; when one is found you generally know what key was used, 
barring a birthday collision.
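
A rough sketch of the table-building side in Qt terms (encryptWithKey() is only 
a stand-in for whatever cipher and mode is being targeted; the XOR body exists 
purely so the sketch compiles):

#include <QByteArray>
#include <QHash>
#include <QList>

// Stand-in for the cipher being targeted. A real attacker would plug in the
// actual cipher + mode here; the XOR below is only to make the sketch build.
// Assumes a non-empty key.
static QByteArray encryptWithKey(const QByteArray &plaintext, const QByteArray &key)
{
    QByteArray out = plaintext;
    char *d = out.data();
    for (int i = 0; i < out.size(); ++i)
        d[i] ^= key.at(i % key.size());
    return out;
}

// Fingerprint table: ciphertext of a known plaintext prefix -> candidate key.
QHash<QByteArray, QByteArray> buildFingerprintTable(const QList<QByteArray> &candidateKeys)
{
    const QByteArray knownPrefix = "<?xml version=\"1.0\"";   // the "fingerprint"
    QHash<QByteArray, QByteArray> table;
    for (const QByteArray &key : candidateKeys)
        table.insert(encryptWithKey(knownPrefix, key), key);
    return table;
}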


Some people like to call these "Rainbow Tables" but I don't. This is a 
standard Big Data problem solving technique.


As for the nested encryption issue, we never did root cause analysis. We 
encountered some repeatable issues and moved on. It could have had 
something to do with the Debian bug where a maintainer "fixed" some 
Valgrind messages by limiting the keys to 32768. We were testing 
transmissions across architectures and I seem to remember it only broke 
in one direction. Long time ago. Used a lot of Chardonnay to purge those 
memories.



On 10/3/19 5:00 AM, Matthew Woehlke wrote:

On 01/10/2019 20.47, Roland Hughes wrote:

To really secure transmitted data, you cannot use an open standard which
has readily identifiable fields. Companies needing great security are
moving to proprietary record layouts containing binary data. Not a
"classic" record layout with contiguous fields, but a scattered layout
placing single field bytes all over the place. For the "free text"
portions like name and address not only in reverse byte order, but
performing a translate under mask first. Object Oriented languages have
a bit of trouble operating in this world but older 3GLs where one can
have multiple record types/structures mapped to a single buffer (think a
union of packed structures in C) can process this data rather quickly.

How is this not just "security through obscurity"? That's almost
universally regarded as equivalent to "no security at all". If you're
going to claim that this is suddenly not the case, you'd best have
some *really* impressive evidence to back it up. Put differently, how
is this different from just throwing another layer of
encry^Wenciphering on your data and calling it a day?

_ALL_ electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.

Your "secrecy" is the key+algorithm combination. When that secret is
learned you are no longer secure. People lull themselves into a false
sense of security regurgitating another Urban Legend.

Well... sure, if you want to get pedantic. However, as I see it, there
are two key differences:

- "Encryption" tries to make it computationally hard to decode a message.

- "Encryption" (ideally) uses a different key for each user, if not each
message, such that compromising one message doesn't compromise the
entire protocol. (Okay, granted this isn't really true for SSL/TLS
unless you are also using client certificates.)

Thanks for agreeing.


...and anyway, I think you are undermining your own argument; if it's
easy to break "strong encryption", wouldn't it be much *easier* to break
what amounts to a basic scramble cipher?


No. This technique is cracking without cracking. You are looking for a 
fingerprint. That fingerprint is the opening string for an xml document 
which must be there per the standard. For JSON it is the quote and colon 
stuff mentioned earlier. You take however many bytes from the logged 
packet as the key size the current thread is processing and perform a 
keyed hit against the database. If found, great! If not, shuffle down 
one byte and try again. Repeat until you've exceeded the attempt count 
you are willing to make or found a match. When you find a match you try 
key or key+salt combination on the entire thing. Pass the output to 
something which checks for seemingly valid XML/JSON then either declare 
victory or defeat.
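
The lookup side of that sketch, walking a window down a captured packet and 
doing keyed hits against the fingerprint table:

#include <QByteArray>
#include <QHash>

// Returns the candidate key whose fingerprint matches a window of the packet,
// or an empty QByteArray if nothing matched within maxShift bytes.
QByteArray findCandidateKey(const QByteArray &packet,
                            const QHash<QByteArray, QByteArray> &table,
                            int fingerprintLen, int maxShift)
{
    for (int offset = 0; offset <= maxShift && offset + fingerprintLen <= packet.size(); ++offset) {
        const QByteArray window = packet.mid(offset, fingerprintLen);
        const auto it = table.constFind(window);
        if (it != table.constEnd())
            return it.value();   // next step: try this key (or key+salt) on the whole packet
    }
    return QByteArray();
}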


If the fingerprint isn't in the data, you cannot use this technique. You 
can't, generally, just Base64 your XML/JSON prior to sending it out 
because they usually create tables 

Re: [Interest] Licensing

2019-10-07 Thread Nikos Chantziaras
Note that there is (or was?) a restriction in the commercial license. 
You are not allowed to use commercial Qt if you previously used open 
source Qt in the project. So you might not even be allowed to switch 
from open source to commercial.


Not sure if that (very) weird term has been removed now or not, but it 
was there a while ago.



On 07/10/2019 18:57, Colin Worth wrote:

Thanks Giuseppe, Jerome, and Uwe. All of this makes sense to me. I will have to 
talk to our software and management people and decide what our best route is. 
Incidentally, we will also need FDA certification for this product. This is all 
a bit preliminary. The product is still in development. I’m in touch with the 
Qt office in Boston as well.

Cheers,
Colin


On Oct 7, 2019, at 1:55 AM, Uwe Rathmann  wrote:


On 10/6/19 12:03 PM, Giuseppe D'Angelo via Interest wrote:

Hey, I linked it two emails ago :-)


Ah yes, sorry.

My response was initially more explicit about FUD, before I decided that it was 
not worth the effort.

Uwe

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Matthew Woehlke
On 04/10/2019 20.17, Roland Hughes wrote:
> On 10/3/19 5:00 AM, Matthew Woehlke wrote:
>> On 01/10/2019 20.47, Roland Hughes wrote:
>>> To really secure transmitted data, you cannot use an open standard which
>>> has readily identifiable fields. Companies needing great security are
>>> moving to proprietary record layouts containing binary data. Not a
>>> "classic" record layout with contiguous fields, but a scattered layout
>>> placing single field bytes all over the place. For the "free text"
>>> portions like name and address not only in reverse byte order, but
>>> performing a translate under mask first. Object Oriented languages have
>>> a bit of trouble operating in this world but older 3GLs where one can
>>> have multiple record types/structures mapped to a single buffer (think a
>>> union of packed structures in C) can process this data rather quickly.
>>
>> How is this not just "security through obscurity"? That's almost
>> universally regarded as equivalent to "no security at all". If you're
>> going to claim that this is suddenly not the case, you'd best have
>> some *really* impressive evidence to back it up. Put differently, how
>> is this different from just throwing another layer of
>> encry^Wenciphering on your data and calling it a day? 
>
> _ALL_ electronic encryption is security by obscurity.
> 
> Take a moment and let that sink in because it is fact.
> 
> Your "secrecy" is the key+algorithm combination. When that secret is
> learned you are no longer secure. People lull themselves into a false
> sense of security regurgitating another Urban Legend.

Well... sure, if you want to get pedantic. However, as I see it, there
are two key differences:

- "Encryption" tries to make it computationally hard to decode a message.

- "Encryption" (ideally) uses a different key for each user, if not each
message, such that compromising one message doesn't compromise the
entire protocol. (Okay, granted this isn't really true for SSL/TLS
unless you are also using client certificates.)

...and anyway, I think you are undermining your own argument; if it's
easy to break "strong encryption", wouldn't it be much *easier* to break
what amounts to a basic scramble cipher?

> One of the very nice things about today's dark world is that most are
> script-kiddies. If they firmly believe they have correctly decrypted
> your TLS/SSL packet yet still see garbage, they assume another layer of
> encryption. They haven't been in IT long enough to know anything about
> data striping or ICM (Insert Character under Mask).

So... again, you're proposing that replacing a "hard" (or not, according
to you) problem with an *easier* problem will improve security?

I suppose it might *in the short term*. In the longer term, that seems
like a losing strategy.

> He came up with a set of test cases and sure enough, this system which
> worked fine with simple XML, JSON, email and text files started
> producing corrupted data at the far end with the edge cases.

Well, I would certainly be concerned about an encryption algorithm that
is unable to reproduce its input. That sounds like a recipe guaranteed
to eventually corrupt someone's data.

> Even if all of that stuff has been fixed, you have to be absolutely
> certain the encryption method you choose doesn't leave its own tell-tale
> fingerprint. Some used to have visible oddities in the output when they
> encrypted groups of contiguous spaces, nulls, etc. Plus, there are quite
> a few places like these showing up on-line.

Again, though, it seems like there ought to be ways to mitigate this. If
I can test for successful decryption without decrypting the *entire*
message, that is clear grounds for improvement.

-- 
Matthew
___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


[Interest] StateMachine questions

2019-10-07 Thread Jason H
I'm reading up about the state machine stuff and was wondering how I would go 
about having X tests with the same Y steps. Unlike parallel states, these are 
still sequential:

State  / SubState
TEST_1 / PREPARE
TEST_1 / PREPARE_COMPLETE
TEST_1 / EXECUTE
TEST_1 / EXECUTE_COMPLETE
TEST_1 / RESULT
TEST_2 / PREPARE
...

Is there an easy way to code this so it is not X*Y?

And how do I provide a UI to follow the state? There doesn't seem to be a 
StateMachine.currentState property, otherwise I could do:
Text {
    text: test[stateMachine.currentState] + " / " +
          test_steps[subStateMachine.currentState]
}

I guess I could just provide a name property, but there's still no 
currentState; do I have to evaluate the `active` property against every possible 
state at every transition?
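
Something like this is the best shape I can come up with so far, as a sketch 
only; buildTestState() and the currentStateName property are things I'd add 
myself, nothing existing in Qt:

#include <QStateMachine>
#include <QState>
#include <QString>
#include <QStringList>
#include <QObject>

// Hypothetical helper: builds the same PREPARE..RESULT chain inside a new test
// state, so the Y steps are written once and reused for every test.
QState *buildTestState(const QString &testName, QStateMachine *machine,
                       QObject *nameTracker)
{
    QState *test = new QState(machine);
    const QStringList steps = { "PREPARE", "PREPARE_COMPLETE",
                                "EXECUTE", "EXECUTE_COMPLETE", "RESULT" };
    QState *prev = nullptr;
    for (const QString &step : steps) {
        QState *s = new QState(test);
        // Expose "TEST_x / STEP" as a dynamic property the UI can read.
        QObject::connect(s, &QState::entered, nameTracker, [=]() {
            nameTracker->setProperty("currentStateName", testName + " / " + step);
        });
        if (prev)
            prev->addTransition(s);   // unconditional here; swap in real triggers
        else
            test->setInitialState(s);
        prev = s;
    }
    return test;
}

If the UI needs a real binding rather than reading a dynamic property, I suppose 
the tracker object would need a proper Q_PROPERTY with a NOTIFY signal.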




___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] heads up - ksplashqml crash

2019-10-07 Thread Roland Hughes
A lot of people don't have any money. They use a corporate or relative's 
cast-off. Still others, like the older computers I have on the workbench 
behind me, have been re-purposed to run BOINC. They won't be upgraded. 
If a drive dies and I have a spare laying around I'll put that in and 
re-load a distro, but by and large the machines won't be fixed. They'll 
search for the cure for cancer and other projects beneficial to humans 
as well as help look for ET, but unless I need to throw a lot of 
machines at something, that is how they will spend their remaining years.


On 10/7/19 10:06 AM, Konstantin Tokarev wrote:


07.10.2019, 18:00, "Roland Hughes" :

My hardware was new enough I could install 390. Others will most likely
not be as lucky.

It may be a good reason to replace their 9-year-old GPU with something 
up-to-date.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On segunda-feira, 7 de outubro de 2019 07:06:07 PDT Roland Hughes wrote:
> We now
> have the storage and computing power available to create the database or
> 2^128 database tables if needed.

Do you know how ludicrous this statement is?

Let's say you had 128 bits for each of the 2^128 entries, with no overhead, 
and each bit weighed 1 picogram (an 8 GB RAM DIMM weighs 185 g, which is 21 ng/
byte). You'll need a storage of 4.3 * 10^25 kg, or about 7.3 times the mass of 
the Earth.

Let's say that creating such a table takes an average of 1 attosecond per 
entry, or one million entries per nanosecond. Note I'm saying your farm is 
producing 10^18 entries per second, reaching at least 1 exaflops, producing 
about 16 exabytes per second of data. You'll need 10 trillion years to 
calculate.

The only way this is possible is if you significantly break the problem such 
that you don't need 2^128 entries. For example, 2^80 entries would weigh 
"only" 155 million tons and that's only 16 yottabytes of storage, taking only 
14 days to run in that magic[*] farm, with magic connectivity and magic 
storage.

[*] After applying Clarke's Third Law.
-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On segunda-feira, 7 de outubro de 2019 05:31:17 PDT Roland Hughes wrote:
> Screaming about the size of the forest one will hide their tree in
> doesn't change the security by obscurity aspect of it. Thumping the desk
> and claiming a forest which is 2^128 * 2^key-bit-width doesn't mean you
> aren't relying on obscurity, especially when they know what tree they
> are looking for.

It's not the usual definition of "security by obscurity". That's usually 
applied to something that is not secure at all, just unknown. Encryption 
algorithms hide nothing in their implementation.

They do hide the key, true. The important thing is that it takes more time to 
brute-force the key than an attacker could reasonably dedicate.

> Let us not forget we are at the end of the x86 era when it comes to what
> evil-doers will use to generate a fingerprint database, or brute force
> crack.
> 
> https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break
> -2048-bit-rsa-encryption-in-8-hours/
> 
> [Now Gidney and Ekerå have shown how a quantum computer could do the
> calculation with just 20 million qubits. Indeed, they show that such a
> device would take just eight hours to complete the calculation.  “[As a
> result], the worst case estimate of how many qubits will be needed to
> factor 2048 bit RSA integers has dropped nearly two orders of
> magnitude,” they say.]

Oh, only 20 million qubits? That's good to know, because current quantum 
computers have something like 100 or 200.

Not 100 million qubits, 100 qubits.

Yes, I know that Shor's algorithm could solve the prime factorisation 
that is at the core of RSA and many other public key encryption mechanisms in 
polynomial time. But no one has ever put it into practice at that scale, 
yet.

And there are all the quantum-resistant algorithms, some of which are already 
deployed (like AES), some of which are in development.

> While there are those here claiming 128-bit and 256-bit are
> "uncrackable" people with money long since moved to 2048-bit because 128
> and 256 are the new 64-bit encryption levels. They know that an entity
> wanting to decrypt their sniffed packets doesn't need the complete
> database, just a few fingerprints which work relatively reliably. They
> won't get everything, but they might get the critical stuff.

You're confusing algorithms. RSA asymmetric encryption today requires more 
than 1024 bits, 2048 recommended, 4096 even better. AES is symmetric 
encryption and requires nowhere near that much, 128 is sufficient, 256 is very 
good. Elliptic curves are also asymmetric and require much less than 1024 
bits.

> Haven't you noticed a pattern over the decades?
> 
> X-bit encryption would take a "super computer" (never actually
> identifying which one) N-years running flat out to crack.
> 
> A few years later
> 
> Y-bit encryption would take a "super computer" (never actually
> identifying which one) N-years running flat out to crack (without any
> mention of why they were/are wrong about X-bit).
> 
> Oh! You wanted "Why?" Sorry.

Again, you're deliberately misleading people here. The supercomputers *are* 
identified. And the fact that technology progresses is no surprise. It's 
*expected* and accounted for. That's why the number of bits in most ciphers is 
increasing, that's why older ciphers are completely dropped, that's why we're 
getting new ones and new versions of TLS.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] Licensing

2019-10-07 Thread Colin Worth
Thanks Giuseppe, Jerome, and Uwe. All of this makes sense to me. I will have to 
talk to our software and management people and decide what our best route is. 
Incidentally, we will also need FDA certification for this product. This is all 
a bit preliminary. The product is still in development. I’m in touch with the 
Qt office in Boston as well.

Cheers,
Colin

> On Oct 7, 2019, at 1:55 AM, Uwe Rathmann  wrote:
> 
>> On 10/6/19 12:03 PM, Giuseppe D'Angelo via Interest wrote:
>> 
>> Hey, I linked it two emails ago :-)
> 
> Ah yes, sorry.
> 
> My response was initially more explicit about FUD, before I decided that it 
> was not worth the effort.
> 
> Uwe
> 
> 
> 
___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] heads up - ksplashqml crash

2019-10-07 Thread Konstantin Tokarev


07.10.2019, 18:00, "Roland Hughes" :
> My hardware was new enough I could install 390. Others will most likely
> not be as lucky. 

It may be a good reason to replace their 9-year-old GPU with something 
up-to-date.

-- 
Regards,
Konstantin

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] heads up - ksplashqml crash

2019-10-07 Thread Roland Hughes
My hardware was new enough I could install 390. Others will most likely 
not be as lucky. I was just sending up a flare, making the assumption a KDE 
developer will try to kick this up the chain. Once I found a workaround 
I didn't care enough to diff the 340 and 390 APIs. 340 has been around a 
very long time and appears to be quite stable. My knee-jerk suspicion 
would be an API difference, with "current" code written against the newer 
API. You could be correct though. This could be a long-latent bug now 
exposed.


What is interesting is the fact this didn't surface until KDE pushed out 
recent updates. After that the crash happened with each boot. Before 
that, nada.


On 10/7/2019 9:29 AM, Sérgio Martins wrote:

On Sat, Oct 5, 2019 at 11:22 PM Roland Hughes
 wrote:

Just in case this is deeper than the ksplashqml.

https://bugs.kde.org/show_bug.cgi?id=408904

Don't know if it is specific to KDE or deeper within Qt. Appears to
happen with nvidia-340 driver but doesn't happen with nvidia-driver-390
installed.

If the "display crashes", goes black for all apps and it's only repro
on an older nvidia driver, as stated in the bug report, then I'd say
the bug is not in Qt, but in the nvidia 340 driver.

If you hope for a workaround in Qt then better get a minimal test-case.

Regards,
Sergio


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593  (cell)
http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Henry Skoglund
Yes, I also know about the lizardmen from Phobos who can crack SSL/TLS 
keys instantly.
If you can show some code all this would be much more credible. After 
all, this is a Qt mailing list, not a science fiction one.


Rgrds Henry


On 2019-10-07 16:06, Roland Hughes wrote:


On 10/7/19 5:00 AM, Thiago Macieira wrote:
You do realise that's not how modern encryption works, right? You do realise 
that SSL/TLS rekeys periodically to avoid even a compromised key from going 
further? That's what the "data limit for all ciphersuites" means: rekey after a 
while.

yeah.

You're apparently willfully ignoring the fact that the same cleartext will not 
result in the same ciphertext when repeated in the transmission, even between 
two rekey events.


No. We are working from two completely different premises. It appears 
to me your premise is they have to be able to decrypt 100% of the 
packets 100% of the time. That's not the premise here.


The premise here is they don't need it all to work today. They just 
need to know that a Merchant account service receives XML/JSON and 
responds in kind. The transaction is the same for a phone app, Web 
site, even when you are standing at that mom & pop retailer physically 
using your card. They (whoever they are) sniff packets to/from the IP 
addresses of the service logging them to disk drives.


The dispatch service hands 100-1000 key combinations out at a time to 
worker computers generating fingerprints for the database. These 
computers could be a botnet, leased from a hosting service or machines 
they own. The receiver service stores the key results in the database.


A credit card processing service of sufficient size will go through a 
massive number of salts and keys, especially with the approaching 
Holiday shopping season. 1280 bytes "should" be more than enough to 
contain a credit card authorization request so this scenario is only 
interested in fast cracking a single packet. Yes, the CC number and 
some other information may well have additional obfuscation but that 
will also be a mechanical process.


Periodically a batch job wakes up and runs the sniffed packets against 
the database looking for matching fingerprints. When it fast cracks 
one it moves it to a different drive/raid array, storage area for the 
next step. This process goes in steps until they have full CC 
information with a transaction approval, weeding out the declined cards.


When the sniffed packet storage falls below some threshold the sniffer 
portion is reactivated to retrieve more packets.


This entire time workers are adding more and more entries to the 
fingerprint database.


These people don't need them all. They are patient. This process is 
automated. They might even configure it to send an email when another 
100 or 1000 valid CCs so they can either sell them on the Dark Web or 
send them through the "buying agent" network.


Yeah, "buying agent" network might need a bit of explanation. Some of 
you may have seen those "work from home" scams where they want a 
"shipping consolidation" person to receive items and repackage them 
into bulk packs for overseas (or wherever) shipping. They want a fall 
person to receive the higher end merchandise which they then bulk ship 
to someone who will sell it on eBay/Amazon/etc.


The CC companies constantly scan for "unusual activity" and call you 
when your card has been compromised. This works when the individuals 
are working with limited information. They have the CC information, 
but they don't have the "where you shop" information. The ones which 
have the information about where you routinely use the card can have a 
better informed "buying agent" network and slow bleed the card without 
tripping the fraud alert systems. If you routinely use said card at 
say, Walmart, 2-3 times per week for purchases of $100-$500 they can 
make one more purchase per week in that price range until you are 
maxed out or start matching up charges with receipts.


The people I get asked to think about are playing a long game. They 
aren't looking to send a crew to Chicago to take out $100 cash 
advances on a million cards bought on the Dark Web or do something 
like this crew did:


https://www.mcall.com/news/watchdog/mc-counterfeit-credit-cards-identity-theft-watchdog-20160625-column.html 



Or the guy who just got 8 years for running such a ring in Las Vegas. 
That's the most recent one turning up in a quick search.


Maybe they are looking to do just that, but are looking for more 
information?


At any rate, the "no-breach" scenario is being seriously looked at. 
Yes, the Salt will change with every packet and the key might well 
change with every packet but these players are only looking to crack a 
subset of packets. Most organizations won't have the infrastructure to 
utilize a billion compromised credit cards. They can handle a few 
hundred to a few thousand per month.


In short, they don't need _everything_. They just need enough to get 
that

Re: [Interest] heads up - ksplashqml crash

2019-10-07 Thread Sérgio Martins
On Sat, Oct 5, 2019 at 11:22 PM Roland Hughes
 wrote:
>
> Just in case this is deeper than the ksplashqml.
>
> https://bugs.kde.org/show_bug.cgi?id=408904
>
> Don't know if it is specific to KDE or deeper within Qt. Appears to
> happen with nvidia-340 driver but doesn't happen with nvidia-driver-390
> installed.

If the "display crashes", goes black for all apps and it's only repro
on an older nvidia driver, as stated in the bug report, then I'd say
the bug is not in Qt, but in the nvidia 340 driver.

If you hope for a workaround in Qt then better get a minimal test-case.

Regards,
Sergio
___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 5:00 AM, Thiago Macieira wrote:
You do realise that's not how modern encryption works, right? You do realise 
that SSL/TLS rekeys periodically to avoid even a compromised key from going 
further? That's what the "data limit for all ciphersuites" means: rekey after a 
while.

yeah.

You're apparently willfully ignoring the fact that the same cleartext will not
result in the same ciphertext when repeated in the transmission, even between
two rekey events.


No. We are working from two completely different premises. It appears to 
me your premise is they have to be able to decrypt 100% of the packets 
100% of the time. That's not the premise here.


The premise here is they don't need it all to work today. They just need 
to know that a Merchant account service receives XML/JSON and responds 
in kind. The transaction is the same for a phone app, Web site, even 
when you are standing at that mom & pop retailer physically using your 
card. They (whoever they are) sniff packets to/from the IP addresses of 
the service logging them to disk drives.


The dispatch service hands 100-1000 key combinations out at a time to 
worker computers generating fingerprints for the database. These 
computers could be a botnet, leased from a hosting service or machines 
they own. The receiver service stores the key results in the database.
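
A sketch of nothing more than that dispatch step; the 128-bit counter standing 
in for a "key combination" is only an illustration of batching, not how a real 
outfit would pick candidates:

#include <QByteArray>
#include <QList>
#include <QtGlobal>
#include <cstring>

// Hand the next batch of candidate keys to a worker. The cursor is a 128-bit
// counter split into two 64-bit halves; a real dispatcher would draw from
// whatever keyspace it is actually targeting.
QList<QByteArray> nextBatch(quint64 &cursorLow, quint64 &cursorHigh, int batchSize)
{
    QList<QByteArray> batch;
    batch.reserve(batchSize);
    for (int i = 0; i < batchSize; ++i) {
        QByteArray key(16, '\0');
        std::memcpy(key.data(), &cursorLow, sizeof cursorLow);
        std::memcpy(key.data() + 8, &cursorHigh, sizeof cursorHigh);
        if (++cursorLow == 0)
            ++cursorHigh;          // carry into the high 64 bits
        batch.append(key);
    }
    return batch;
}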


A credit card processing service of sufficient size will go through a 
massive number of salts and keys, especially with the approaching 
Holiday shopping season. 1280 bytes "should" be more than enough to 
contain a credit card authorization request so this scenario is only 
interested in fast cracking a single packet. Yes, the CC number and some 
other information may well have additional obfuscation but that will 
also be a mechanical process.


Periodically a batch job wakes up and runs the sniffed packets against 
the database looking for matching fingerprints. When it fast cracks one 
it moves it to a different drive/raid array, storage area for the next 
step. This process goes in steps until they have full CC information 
with a transaction approval, weeding out the declined cards.


When the sniffed packet storage falls below some threshold the sniffer 
portion is reactivated to retrieve more packets.


This entire time workers are adding more and more entries to the 
fingerprint database.


These people don't need them all. They are patient. This process is 
automated. They might even configure it to send an email when another 
100 or 1000 valid CCs so they can either sell them on the Dark Web or 
send them through the "buying agent" network.


Yeah, "buying agent" network might need a bit of explanation. Some of 
you may have seen those "work from home" scams where they want a 
"shipping consolidation" person to receive items and repackage them into 
bulk packs for overseas (or wherever) shipping. They want a fall person 
to receive the higher end merchandise which they then bulk ship to 
someone who will sell it on eBay/Amazon/etc.


The CC companies constantly scan for "unusual activity" and call you 
when your card has been compromised. This works when the individuals are 
working with limited information. They have the CC information, but they 
don't have the "where you shop" information. The ones which have the 
information about where you routinely use the card can have a better 
informed "buying agent" network and slow bleed the card without tripping 
the fraud alert systems. If you routinely use said card at say, Walmart, 
2-3 times per week for purchases of $100-$500 they can make one more 
purchase per week in that price range until you are maxed out or start 
matching up charges with receipts.


The people I get asked to think about are playing a long game. They 
aren't looking to send a crew to Chicago to take out $100 cash advances 
on a million cards bought on the Dark Web or do something like this crew 
did:


https://www.mcall.com/news/watchdog/mc-counterfeit-credit-cards-identity-theft-watchdog-20160625-column.html

Or the guy who just got 8 years for running such a ring in Las Vegas. 
That's the most recent one turning up in a quick search.


Maybe they are looking to do just that, but are looking for more 
information?


At any rate, the "no-breach" scenario is being seriously looked at. Yes, 
the Salt will change with every packet and the key might well change 
with every packet but these players are only looking to crack a subset 
of packets. Most organizations won't have the infrastructure to utilize 
a billion compromised credit cards. They can handle a few hundred to a 
few thousand per month.


In short, they don't need _everything_. They just need enough to get 
that much



And don't forget the Initialisation Vector. Even if you could compute the
fingerprint database, you still need to multiply it by 2^128  to account for
all possible IVs.


Perhaps. A few of those won't be used, such as low-values and 
high-values. That also assumes none of t

Re: [Interest] Licensing

2019-10-07 Thread Giuseppe D'Angelo via Interest

Il 07/10/19 07:55, Uwe Rathmann ha scritto:

Ah yes, sorry.

My response was initially more explicit about FUD, before I decided
that it was not worth the effort.


Huh? It was not my intention to spread FUD. I'm not telling anyone "buy 
a license, you never know..." or "stick to LGPL, don't worry about paid 
licenses, you don't need them".


I'm actually trying to tell the opposite -- remove the all uncertainty 
from your specific use case, by getting an informed opinion by someone 
protecting your interests.


(No, I don't get a % from law firms. :-P)

HTH,
--
Giuseppe D'Angelo | giuseppe.dang...@kdab.com | Senior Software Engineer
KDAB (France) S.A.S., a KDAB Group company
Tel. France +33 (0)4 90 84 08 53, http://www.kdab.com
KDAB - The Qt, C++ and OpenGL Experts



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 5:00 AM, Konrad Rosenbaum wrote:

Hi,

On 10/5/19 2:17 AM, Roland Hughes wrote:

_ALL_  electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.

Okay, out with it! What secret service are you working for and why are
you trying to sell everybody on bullshit that weakens our collective
security?


SCNR, Konrad


LOL,

Konrad,

I haven't had any active clearance in a very long time, assuming nobody 
was lying during those projects early in my career.


This is a world of big data. Infobright, OrientDB, Riak, etc. OpenSource 
and massive, some with data compression up to 40:1. That's assuming you 
don't scope your attacks to the 32TB single table limit of PostgreSQL. 
We have botnets available to evil doers with sizes in the millions.


Screaming about the size of the forest one will hide their tree in 
doesn't change the security by obscurity aspect of it. Thumping the desk 
and claiming a forest which is 2^128 * 2^key-bit-width doesn't mean you 
aren't relying on obscurity, especially when they know what tree they 
are looking for.


Removing the tree is how one has to proceed.

Let us not forget we are at the end of the x86 era when it comes to what 
evil-doers will use to generate a fingerprint database, or brute force 
crack.


https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break-2048-bit-rsa-encryption-in-8-hours/

[Now Gidney and Ekerå have shown how a quantum computer could do the 
calculation with just 20 million qubits. Indeed, they show that such a 
device would take just eight hours to complete the calculation.  “[As a 
result], the worst case estimate of how many qubits will be needed to 
factor 2048 bit RSA integers has dropped nearly two orders of 
magnitude,” they say.]


While there are those here claiming 128-bit and 256-bit are 
"uncrackable" people with money long since moved to 2048-bit because 128 
and 256 are the new 64-bit encryption levels. They know that an entity 
wanting to decrypt their sniffed packets doesn't need the complete 
database, just a few fingerprints which work relatively reliably. They 
won't get everything, but they might get the critical stuff.


Haven't you noticed a pattern over the decades?

X-bit encryption would take a "super computer" (never actually 
identifying which one) N-years running flat out to crack.


A few years later

Y-bit encryption would take a "super computer" (never actually 
identifying which one) N-years running flat out to crack (without any 
mention of why they were/are wrong about X-bit).


Oh! You wanted "Why?" Sorry.

I get this list in digest form. Most of the time I don't read it. Only a 
tiny fraction of my life revolves around Qt and small systems. This 
whole security thing came up in another part of my world, then I 
actually read something here.


*nix did it wrong. No application should be allowed to open its own 
TCP/IP or network connection. No application should have any knowledge 
of transport layer security, certificates or anything else. Unisys and a 
few other "big systems" platforms are baking into their OS a Network 
Software Appliance. This allows system managers to dynamically change 
transport layer communications protocols on a whim. Not just transport 
layer security, but what network is in use, even non-TCP based things 
like Token Ring, DECNet, left-handed-monkey-wrench, etc.


All of that is well and good. It's how things should have been done to 
start with.


The fly in the ointment is developers using "human interpretable" data 
formats for transmission. Moving to a non-IP based network (meaning not 
running a different protocol on top of IP but running a completely 
different network protocol on machines which don't even have the IP 
stack software installed) can buy you a lot, but if you are a high value 
target and that network runs between data centers someone will 
eventually find a way to tap into it.


Even if that is not your point of penetration some people/developers 
store this human readable stuff on disk. My God, CouchDB actually stores 
JSON! Yeah, that's how you want to see someone storing a mass quantity 
of CC information along with answers to security questions and mother's 
maiden name.


My having to ponder all of this is how we got here.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest