Second part of reply to Joseph Gwinn...
On 12.04.22 00:02, Joseph Gwinn wrote:
> On 09.04.22 20:35, Lux, Jim wrote:
>> On 4/9/22 10:03 AM, [email protected] wrote:
>>> Recently I was discussing some measurement results with my colleagues
>>> as we're trying to come up with a low-noise JFET which can
>>> successfully be integrated into a SiGe BiCMOS process, and quite often
>>> we're also struggling to identify why exactly variant A has
>>> significantly lower noise than variant B, or why a new approach does
>>> not improve noise the way it was expected.
>>> So from a manufacturing-process design point of view, achieving low
>>> 1/f noise is indeed closer to sheer dumb luck than the proverbial
>>> "more art than science" suggests.
>> This is very, very true. Some manufacturers get very low noise or very
>> low leakage (or both), essentially by being "lucky". From what I've
>> been told, there are no good models, nor predictions - so people share
>> "lore" of "if you get these 2Nxxxx FETs from the mfr in England, they're
>> really good" until they aren't. There isn't enough market for these,
>> so I suspect research money to "solve the problem" isn't available.
>> Like all those microwave MMICs with low noise, they worry about 100 MHz
>> and up (if not 1 GHz); they certainly don't worry about (or control for)
>> noise at 5 MHz, or where the 1/f knee is. So just because you got good
>> results with a batch of them, the next batch might not. It's not even
>> clear you could come up with a standardized test method, because the
>> noise depends on a lot of other factors (drain current, for instance).
> I bet (hope?) it isn't quite that bad.
> But the fact that one cannot test and sort for 10-Hz flicker noise in
> three milliseconds would suffice.
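That timing limit is easy to see from first principles: resolving a spectrum down to 10 Hz needs a frequency resolution of at most 10 Hz, hence at least 0.1 s per FFT record, and averaging several records to beat down the estimator variance multiplies that. A minimal back-of-the-envelope sketch (the record resolution and averaging count are illustrative assumptions, not anyone's actual test recipe):

```python
# Rough lower bound on the time needed to characterize 10 Hz flicker noise.
# Assumptions (illustrative only): Welch-style PSD with 10 Hz frequency
# resolution and 20 averaged records to get a usable variance on the estimate.
df = 10.0            # desired frequency resolution in Hz
n_avg = 20           # number of averaged records (assumed)
t_record = 1.0 / df  # seconds per FFT record = 1/df
t_total = n_avg * t_record
print(f"per record: {t_record:.1f} s, total: {t_total:.0f} s")
# Even a single record takes 100 ms -- far beyond a 3 ms production test slot.
```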
My experience is probably not representative of the semiconductor
industry as a whole, especially since, unlike the big players, we run a
mix of different processes on our line and also have to deal with nearly
no redundancy in equipment. Industry practice boils down to running a
single process per line where possible and operating several identical
lines in parallel, so that tool downtime on one line can be compensated,
at least in part, by using the tools of the other, identical lines.
But in my experience it's not uncommon for subsequent lots of one
particular product to vary significantly in various parameters. Of
course, these "subsequent" lots usually do not run back-to-back, so a
random mix of lots from different processes runs between them, which
quite likely is at least part of the problem. But even when we run
several lots back-to-back, they never come out identical: variation is
somewhat reduced, but still easily observable, even in non-critical
parameters (non-critical in the sense of not reacting much to changes in
process conditions).
Even though we measure noise only when there's a specific request for
it, we do occasionally see significant variation in 1/f corner frequency
even between nominally identical devices on the same wafer. Clearly, if
devices vary within a single wafer, you shouldn't expect to do better
from lot to lot, with who-knows-what happening in the processing line in
between. And if you see significant variation within one wafer, you
essentially have to measure a sizeable fraction of the devices on that
wafer to be sure your sampling is representative of the wafer as a
whole, so you can reliably determine whether or not the wafer meets your
spec. That drives up your test time.
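To put a rough number on "sizeable fraction": the textbook sample-size estimate for a mean, assuming the corner frequencies scatter roughly normally across the wafer (the 30% spread and 10% target precision below are made-up illustrative figures, not our measured values):

```python
import math

# How many dies must be measured so the sample mean of the 1/f corner
# frequency lands within +/-10% of the wafer mean with ~95% confidence?
# Assumptions (illustrative): 30% relative standard deviation across the
# wafer, roughly normal scatter, z = 1.96 for a 95% two-sided interval.
rel_sigma = 0.30   # assumed within-wafer relative standard deviation
rel_tol = 0.10     # want the mean pinned down to +/-10%
z = 1.96           # 95% confidence, two-sided
n = math.ceil((z * rel_sigma / rel_tol) ** 2)
print(n)  # -> 35 dies
```

The quadratic dependence on the spread/tolerance ratio is the point: halve the tolerance or double the spread and the required sample roughly quadruples.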
But to put things into perspective: we spend about 2 to 3 hours of
testing time per wafer in total (parametric DC and RF testing only, not
including functional testing of customer devices), with approximately
20% of the dies measured. Spending one more minute per die for 1/f
noise, assuming we can get away with the same sampling scheme, would be
very much manageable.
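For concreteness, the arithmetic behind that estimate (the die count is an assumed round number for illustration; the real figure depends on die size and wafer diameter):

```python
# Added test time for 1/f noise under the existing ~20% sampling scheme.
# Assumption (illustrative): 500 dies per wafer.
dies_per_wafer = 500
sampled = 0.20 * dies_per_wafer        # ~20% of dies measured
extra_min_per_die = 1.0                # one extra minute per sampled die
extra_hours = sampled * extra_min_per_die / 60.0
print(f"{sampled:.0f} dies -> {extra_hours:.1f} extra hours per wafer")
# -> 100 dies -> 1.7 extra hours per wafer, on top of the 2-3 h baseline
```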
Bests,
Florian
_______________________________________________
time-nuts mailing list -- [email protected] -- To unsubscribe send an
email to [email protected]