On Wed, 19 Jul 2023 at 10:44, Guido Vetere
<[email protected]> wrote:
>
> a quick off-the-cuff comment after skimming this illuminating talk
> we take for granted that LLMs can only be what the Microsoft / Google
> duopoly offers us today

There does not seem to be a duopoly on LLMs at the moment: dozens of
fully open-source ones exist, produced all over the world. Even the
Technology Innovation Institute in the United Arab Emirates has released
an LLM as open source: https://falconllm.tii.ae/

> but if we think about it for a moment, this necessity does not exist:
> perhaps it is just a propaganda illusion
> why should we take for granted that everyone on the face of the earth
> needs to generate text in every language?

There are hundreds of languages on the face of the earth; it seems
fairly obvious to me that everyone should have the right to generate
text in their own language.

> an LLM in Italian and English, for instance, would suit me just fine,
> so I am sure I will never use most of GPT-4's umpteen billions of
> parameters (even though I pay for them)

The generative capability of an LLM in a specific language does not
depend only on the training done in that specific language, but also on
all the other languages. The concepts learned in each of the languages
the LLM was trained on become part of the same neural network, and
consequently contribute to generation.

It is not very different from what happens when I read something in
English and then use what I learned from that reading to write an essay
in Italian.
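
This cross-lingual transfer can be sketched with a toy experiment
(purely illustrative, nothing like a real LLM; the words, labels, and
trigram features are made up for the demo): a perceptron trained only on
English words classifies Italian cognates correctly, because the two
languages share sub-word features, much as languages share parameters
inside a single multilingual network.

```python
# Toy illustration (not a real LLM): a classifier trained only on
# English words transfers to Italian cognates because both languages
# share character-trigram features.

def trigrams(word):
    """Character trigrams act as a crude shared sub-word vocabulary."""
    return {word[i:i + 3] for i in range(len(word) - 2)}

# English-only training data: 1 = computing terms, 0 = food terms
train = [("information", 1), ("computation", 1), ("generation", 1),
         ("banana", 0), ("tomato", 0), ("pasta", 0)]

# Train a simple perceptron over the shared trigram features
weights = {}
for _ in range(10):
    for word, label in train:
        score = sum(weights.get(f, 0.0) for f in trigrams(word))
        pred = 1 if score > 0 else 0
        if pred != label:
            step = 1.0 if label == 1 else -1.0
            for f in trigrams(word):
                weights[f] = weights.get(f, 0.0) + step

def predict(word):
    return 1 if sum(weights.get(f, 0.0) for f in trigrams(word)) > 0 else 0

# Italian words never seen in training: shared trigrams such as
# "inf", "orm", "ion" carry the learned signal across languages.
print(predict("informazione"), predict("generazione"), predict("patata"))
```

The two Italian computing cognates come out as class 1 and "patata" as
class 0, even though no Italian appeared in the training set.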

> the fact that the same LLM should serve me for generating both text
> and software is also rather odd: I would be quite willing to turn to
> different platforms
> in short, I see great opportunities for downsizing "by task"

A "generalist" training allows the LLM to make interdisciplinary
connections. It is already envisaged that, after the generalist
training, one can continue with a "vertical" training in the preferred
domain, possibly "lightening" the model of irrelevant parts so as to
speed up inference and reduce its energy consumption.

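As a minimal sketch of that "lightening" step, here is magnitude pruning
on a toy weight matrix (an illustration under assumed toy data, not a
real model; production pipelines use structured pruning, distillation,
or quantization):

```python
import random

random.seed(0)

# Toy stand-in for one layer of a network: an 8x8 dense weight matrix.
W = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]

# Magnitude pruning: zero out the 50% of weights with the smallest
# absolute value. The surviving sparse weights can be stored and
# multiplied more cheaply, cutting inference time and energy use.
magnitudes = sorted(abs(w) for row in W for w in row)
threshold = magnitudes[len(magnitudes) // 2]

W_pruned = [[w if abs(w) >= threshold else 0.0 for w in row] for row in W]

kept = sum(w != 0.0 for row in W_pruned for w in row)
print(f"kept {kept}/64 weights")
```

Half the weights are zeroed; with sparse kernels, that directly reduces
the multiply-accumulate work done at inference time.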
> and here comes the relevance of what Acemoglu says: with public
> policies we should incentivise development that goes in a different
> direction, not just try to constrain the current monopolistic direction
> in fact, several public and private players are already at work in the
> search for new models
> G.
>
> On Wed, 19 Jul 2023 at 10:12, Daniela Tafani <[email protected]> wrote:
>>
>> Daron Acemoglu
>> Thank you, Cristina, for that wonderful introduction and to you and Tommaso 
>> for inviting me. […]
>> As Cristina said, I'm going to talk about something that's partly inspired 
>> by my book. AI and
>> antitrust in 10 minutes. So that's a tall order especially if I try to blend 
>> in ideas from the book, so
>> let me jump into it. I'm going to do 10 questions and answers in 10
>> minutes, but since that's a very short time I'll just give you the
>> answers. I'll let your imagination do the job of working out what the
>> questions might have been. These are the answers.
>> - Yes, generative AI has great potential, so I am completely convinced
>> that this is a very interesting technology that can bring lots of good
>> and has capabilities we can build on. But let's move forward.
>> - And yes, I believe that monopoly is everywhere in the tech sector. So 
>> here, perhaps I differ
>> from many IO economists, and I subscribe to the duck test. If something 
>> looks like a duck,
>> walks like a duck, and quacks like a duck, it is a duck. So if you have 
>> companies that have
>> reached sizes never before seen in human history, and that dominate a
>> particular line
>> of business, they are monopolies. So we have to grapple with that. And that 
>> means all sorts
>> of regulatory tools have to be considered, including antitrust. So this is 
>> absolutely on
>> target.
>> - And yes, in my view this is getting worse with foundation models,
>> because there is a likelihood that we may go towards a duopoly, with
>> Microsoft/OpenAI and Google as the two key players. Even though open
>> source and many other competitors are going to try to get into
>> foundation models, the current business model of foundation models is
>> very resource intensive. So that raises the possibility (it does not in
>> any way create a certainty, but it raises the possibility) that these
>> two companies and their models are going to be the dominant ones on
>> which many others will have to build, raising all of the issues of
>> vertical product creation and all sorts of other questions that are
>> going to be central for policymakers and economists to grapple with.
>> - But no, I actually don't think monopoly power leading to high prices is 
>> the main problem
>> that we're dealing with. You know, of course, that is a problem. But if the 
>> only issue was
>> that because the foundation models are controlled by, you know, Google and 
>> Microsoft,
>> they’re going to charge higher prices, and as a result, the apps that are 
>> developed on them
>> are going to be more expensive, that would be, of course a pity and it's 
>> something we can
>> do something about, but it wouldn't be the end of the world. So we get many 
>> new apps.
>> They cost a little bit more. We don't get quite the consumer surplus. Woe is 
>> us, but not the
>> end of the world. The problem is the direction of innovation. The problem is 
>> that the current
>> market structure is selecting a particular direction of innovation. And that 
>> has much more
>> sweeping consequences. Taking the set of products and technologies as given 
>> and pricing
>> them above marginal cost and thus losing some of the welfare triangle is not 
>> the main issue.
>> There’s the potential for doing much greater damage. No, this is not because 
>> of existential
>> risk. In fact, like Cristina was implying, when all of these tech leaders 
>> are talking about
>> existential risk I see it as either a blind spot or a ploy for making us not 
>> worry about the
>> bigger risks. The bigger risks in my mind are in the labour market. Most of 
>> us earn our living
>> in the labour market, so what happens to jobs is the most important issue. 
>> And the current
>> direction of AI looks like it is going to follow some of the trends we have 
>> seen with digital
>> technologies before: failing to create complementarities with human
>> workers and skills, and instead going much more towards automation,
>> hence generating inequality and potential job losses, especially for
>> workers without very specialised skills such as postgraduate degrees.
>> - And no, it's not just economics. There is a real danger here that
>> the current direction of generative AI could continue existing trends
>> that we saw with social media: degrading political conversations,
>> increasing the amount of misinformation and disinformation with a much
>> more powerful tool, and creating a particular type of ecosystem in
>> online forums where people are drawn in on the basis of emotion rather
>> than engagement, hence generally working towards the exploitation of
>> people in their capacities and duties as democratic citizens. It is
>> this twin problem, inequality and the elimination of good jobs in the
>> labour market as well as the erosion of democratic capacity, that I
>> think is most problematic.
>> - No, I am not a Luddite. So I am not saying that this is in the
>> nature of technology, nor that we should oppose technological change.
>> The issue here is that we are not on the right path. What's great
>> about technology in general and generative AI in particular is that
>> it's a highly malleable type of technological platform, or what some
>> economic historians used to call a general purpose technology, meaning
>> that you can use it for creating many apps, many different types of
>> sub-technologies, and many different directions are possible. It is
>> not complete idle talk. When people in the 2000s said that online
>> communication and social media were going to democratise
>> communication, it looks very naive today. But that potential was
>> there, and that potential is much greater with AI.
>> When some people in the tech industry talk about generative AI being useful 
>> to humans in
>> terms of getting better information, performing better tasks so generative 
>> AI… actually, I
>> think the great potential that I mentioned at the beginning is precisely in 
>> being a human
>> complementary technology. The tragedy of our current age is that we have 
>> almost all
>> information that is at least codified available in some form. But we do not 
>> have the
>> processing power to decide which one we should retrieve, how we should
>> interpret it, how we should process it, and which types of information
>> we should engage with in different forms.
>> Generative AI has the capability to improve human interaction with 
>> information and hence
>> generate a lot of tasks, not just for knowledge workers, but for 
>> electricians, for carpenters,
>> for educators, for healthcare workers. So that possibility is there.
>> - But no, we are not going in the right direction. So we do need a 
>> redirection of technological
>> change.
>> - And no, I don't think it is naive or unrealistic to think about the
>> redirection of technological change. One view, which is common among
>> some economists and some tech leaders, goes back to the idea that
>> technology somehow has a preordained path and we just have to follow
>> it. No, I am denying that, and I think history is quite a good guide
>> in showing how malleable technology is. It goes back to a saying by
>> Ferdinand de Lesseps, of Suez and Panama Canal fame, which we
>> discussed in this book, where he said: don't worry, men of genius will
>> arise and solve all problems. Who are today's men of genius? Maybe
>> some Altman or Elon Musk. But no, I don't think we should trust them.
>> So I think the direction of technology is malleable, but it's also a
>> societal choice.
>> - And yes, as Cristina was hinting, antitrust has a very important
>> role in this, for two reasons. One, because if we want alternatives,
>> they are not very likely to come from a duopolistic or highly
>> oligopolistic structure, especially one further empowered by killer
>> acquisitions and by these companies blocking other types of
>> technologies that do not fit well with their business model. So if we
>> want more alternatives that go in a more human-complementary direction
>> or a more pro-democratic direction, or create a more open competitive
>> environment, I think we have to use antitrust tools, including the
>> potential breakup of the largest companies, which are too big. The
>> other reason is that this type of market power comes with enormous
>> social power. And by that social power, I mean economic power, and
>> also general social power. Tech companies have an enormous sway on
>> public opinion, which I think is associated with their mega profits.
>> And again, I don't think that creates a healthy environment.
>> - And no, finally, I don't think antitrust is the main tool as Cristina was 
>> also hinting in her
>> introductory comment. I think antitrust is a very blunt tool and I think for 
>> the redirection
>> of technology we need a suite of tools, which should include rules on
>> exactly how data is used and accessed. We need a new interoperability
>> type of approach, as well as ways to compensate data creators and to
>> actually encourage more creation of creative data. We also
>> need to provide explicit incentives, such as, for example, those that have 
>> been successful
>> in the field of renewable technology, where we encourage more of the 
>> socially valuable
>> types of technologies. And I think there are a number of tools for that and 
>> we may also
>> need more tax policies to discourage the worst types of business models and 
>> create room
>> for alternative business models. So I am actually quite favourable, although 
>> I think much
>> more study is needed, to a digital ad tax that creates more openness for 
>> alternative
>> business models based on things like Wikipedia or subscription models in the 
>> online space.
>> - Finally, I think we also need to rethink other tools that we have, like 
>> fiscal tools that
>> currently create a very asymmetric playing field between capital and
>> labour. And going back to the job market and labour market inequality
>> issues, I think equating marginal tax rates between capital and labour
>> is something we should definitely revisit. Thank you.
>>
>> https://mailchi.mp/cepr/central-bank-communication-rpn-seminar-series-516707
>>
>> _______________________________________________
>> nexa mailing list
>> [email protected]
>> https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
>
