Apologies to everyone for injecting email order chaos into the way I'm replying to messages on this list. Will try to improve.
Anyway, great points, Arzak, especially these. -Kate

> While NGOs play a critical role in advocating for AI safety and ethical
> considerations, their success is tempered by the fast pace of technological
> development and the powerful commercial interests driving AI innovation,
> with very few opportunities and resources for participation in AI
> development, especially for organizations from the Global South.
>
> Arzak Khan
> Founder
> (I4C) Center for Artificial Intelligence and Human Rights
> www.i4caihr.net
> X: @arzakkhan
>
> On Tue, 18 Jun 2024 at 5:20 AM, Kate Krauss <[email protected]> wrote:
>
>> So OpenAI has a conflicted mission, a weak board, an insanely risky goal,
>> and no accountability (am I missing something?). Oh right, their product is
>> evolving at a million miles an hour.
>>
>> They've shed many of the staff and board members who cared most about
>> safety.
>>
>> Microsoft, their funder, could rein them in but is motivated instead
>> to egg them on. And now they've got a board member with very close ties to
>> two US presidents and one of the world's most powerful spy agencies. The
>> keys are on the table, as Juan Benet would say.
>>
>> I don't think OpenAI could be getting more press coverage--the coverage
>> has been near-constant and pretty responsible.
>>
>> Are the NGOs working on this having any luck?
>>
>> -Kate
>>
>> On Sun, Jun 16, 2024 at 12:27 PM Andrés Leopoldo Pacheco Sanfuentes <[email protected]> wrote:
>>
>>> Sorry but “accountability” runs afoul of profit so many times, and the
>>> “mission” of OpenAI is DoubleSpeak:
>>>
>>> OpenAI is an AI research and deployment company. Our mission is to
>>> ensure that artificial general intelligence benefits all of humanity.
>>>
>>> Regards / Saludos / Grato
>>>
>>> Andrés Leopoldo Pacheco Sanfuentes
>>> Pronouns: He/Him/They/Them (equal preference)
>>>
>>> On Jun 16, 2024, at 10:52 AM, Kate Krauss <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> There is currently no accountability for the decisions at OpenAI, to my
>>> knowledge. What has to happen for that to change? The board is not working.
>>>
>>> How can the company be held accountable? I'm especially interested in
>>> the thoughts of policy people and lawyers on this list. And yes, choosing
>>> a spy chief for the board is a big red flag.
>>>
>>> Sincerely,
>>>
>>> Kate
>>>
>>> On Sat, Jun 15, 2024 at 12:16 AM Sawsan Gad <[email protected]> wrote:
>>>
>>>> Hello friends —
>>>>
>>>> I was so happy when Liberationtech was resurrected, and of course a
>>>> former head of the NSA joining an AI company is something that needs to be
>>>> covered and discussed.
>>>>
>>>> However, I hope we’re not quickly degenerating into Trump-this,
>>>> Trump-that (and sensationalizing the title, only to realize the guy “was
>>>> asked to continue under Biden” buried deep down inside). (!)
>>>>
>>>> Journalists may need to do this kind of (… work..?) to keep their jobs
>>>> — god knows for how long. Normal people, not so much.
>>>>
>>>> People are working very hard to restore a basic level of trust among
>>>> family and friends, after the several political and civil abuses of the
>>>> last few years. Let’s please keep good spirits and stay relevant on the
>>>> things that we all care about, not assume the political leanings of others,
>>>> and not assume that magic words will evoke certain reactions à la Pavlov.
>>>>
>>>> Now, back to discussing OpenAI. :)
>>>> (Sorry Kate if that’s too forward. All respect to you, thank you for
>>>> sharing the article.)
>>>>
>>>> Sawsan Gad
>>>> PhD student - Geoinformatics
>>>> George Mason University
>>>>
>>>> On Fri, Jun 14, 2024 at 8:05 PM Kate Krauss <[email protected]> wrote:
>>>>
>>>>> Sam Altman, one of AI's most important leaders--at least for now--is a
>>>>> man with incredible contacts, wonderful social skills, and apparently few
>>>>> scruples. Appointing the former head of the NSA to OpenAI's board
>>>>> demonstrates that this company is unaccountable. This company puts
>>>>> Americans--and everybody else in the world--at risk.
>>>>>
>>>>> How can OpenAI be made accountable? The stakes are so high. Its board
>>>>> has already failed to contain it.
>>>>>
>>>>> Not even the worst part of this, but new board member Nakasone's hobby
>>>>> horse is that the US must out-compete China in generative AI.
>>>>>
>>>>> -Kate
>>>>>
>>>>> ps: What happens at OpenAI if Trump is re-elected?
>>>>>
>>>>> *Washington Post: OpenAI adds Trump-appointed former NSA director to
>>>>> its board*
>>>>> Paul M. Nakasone joins OpenAI’s board following a dramatic shakeup, as
>>>>> a tough regulatory environment pushes tech companies toward board members
>>>>> with military expertise.
>>>>>
>>>>> By Cat Zakrzewski and Gerrit De Vynck
>>>>> Updated June 14, 2024 at 12:16 p.m. EDT | Published June 13, 2024 at
>>>>> 5:00 p.m. EDT
>>>>>
>>>>> The board appointment of retired Army Gen. Paul M. Nakasone comes as
>>>>> OpenAI tries to quell criticism of its security practices. (Ricky
>>>>> Carioti/The Washington Post)
>>>>>
>>>>> OpenAI has tapped former U.S. Army general and National Security
>>>>> Agency director Paul M. Nakasone to join its board of directors, the
>>>>> continuation of a reshuffling spurred by CEO Sam Altman’s temporary
>>>>> ousting in November.
>>>>>
>>>>> Nakasone, a Trump appointee who took over the NSA in 2018 and was
>>>>> asked to continue in the role under Biden, will join the OpenAI board’s
>>>>> Safety and Security Committee, which the company stood up in late May to
>>>>> evaluate and improve its policies for testing models and curbing abuse.
>>>>>
>>>>> The appointment of the career Army officer, who was the
>>>>> longest-serving leader of U.S. Cybercom, comes as OpenAI tries to quell
>>>>> criticism of its security practices — including from some of the company’s
>>>>> current and former employees who allege the ChatGPT maker prioritizes
>>>>> profits over the safety of its products. The company is under increasing
>>>>> scrutiny following the exodus of several key employees and a public
>>>>> letter that called for sweeping changes to its practices.
>>>>>
>>>>> “OpenAI occupies a unique role, facing cyber threats while pioneering
>>>>> transformative technology that could revolutionize how institutions combat
>>>>> them," Nakasone told The Post in a statement. "I am looking forward to
>>>>> supporting the company in safeguarding its innovations while leveraging
>>>>> them to benefit society at large.”
>>>>>
>>>>> Amid the public backlash, OpenAI has said it is hiring more security
>>>>> engineers and increasing transparency about its approach to securing the
>>>>> systems that power its research. Last week, a former employee, Leopold
>>>>> Aschenbrenner, said on a podcast that he had written a memo to OpenAI’s
>>>>> board last year because he felt the company’s security was “egregiously
>>>>> insufficient” to stop a foreign government from taking control of its
>>>>> technology by hacking.
>>>>>
>>>>> Security researchers have also pointed out that chatbots are
>>>>> vulnerable to “prompt injection” attacks, in which hackers can break into
>>>>> a company’s computer system through a chatbot that is hooked up to its
>>>>> internal databases.
>>>>> Some companies also ban their employees from using
>>>>> ChatGPT out of concern that OpenAI may not be able to properly protect
>>>>> sensitive information fed into its chatbot.
>>>>>
>>>>> Nakasone joins OpenAI’s board following a dramatic board shake-up.
>>>>> Amid a tougher regulatory environment and increased efforts to digitize
>>>>> government and military services, tech companies are increasingly seeking
>>>>> board members with military expertise. Amazon’s board includes Keith
>>>>> Alexander, who was previously the commander of U.S. Cyber Command and the
>>>>> director of the NSA. Google Public Sector, a division of the company that
>>>>> focuses on selling cloud services to governments, also has retired generals
>>>>> on its board. (Amazon founder Jeff Bezos owns The Washington Post.)
>>>>>
>>>>> Until January, OpenAI had a ban on the use of its products for
>>>>> “military and warfare.” The company says the prohibition was removed to
>>>>> allow for military uses that align with its values, including disaster
>>>>> relief and support for veterans.
>>>>>
>>>>> “Our policies have consistently prohibited the use of our tools,
>>>>> including our API and ChatGPT, to ‘develop or use weapons, injure others or
>>>>> destroy property,’” OpenAI spokesperson Liz Bourgeois said. “That has not
>>>>> changed.” Nakasone did not respond to a request for comment.
>>>>>
>>>>> Nakasone brings deep Washington experience to the board, as the
>>>>> company tries to build a more sophisticated government relations strategy
>>>>> and push the message to policymakers that U.S. AI companies are a bulwark
>>>>> against China.
>>>>>
>>>>> “We want to make sure that American companies ... have the lead in the
>>>>> innovation of this technology, I think the disruptive technology of this
>>>>> century,” Nakasone said when asked about AI during a recent Post Live
>>>>> interview.
>>>>>
>>>>> --
>>>>> Liberationtech is public & archives are searchable.
List rules:
>>>>> https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to
>>>>> digest mode, or change password by emailing
>>>>> [email protected].
