AI warnings; US senators investigate possible threats
International News update


Richard Blumenthal, a US Senate Democrat, raised questions at a Senate hearing about ChatGPT, GPT-4, and other artificial intelligence tools: how they could benefit, and how they could harm, the security of an individual person and of a whole country.

 


 

Richard Blumenthal first put this question to the OpenAI CEO:

 

“I alluded, in my opening remarks, to the jobs issue, the economic effects on employment. I think you have said, in fact, and I’m going to quote, ‘development of superhuman machine intelligence as probably the greatest threat to the continued existence of humanity,’ end quote. You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is and whether you share that concern.”




Sam Altman (OpenAI CEO):



“Like with all technological revolutions, I expect there to be a significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we go back to a previous technological revolution and talk about the jobs that exist on the other side, you can go back and read books about this; it’s what people said at the time, and it’s difficult. I believe that there will be far greater jobs on the other side of this and that the jobs of today will get better. First of all, I think it’s important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused.

 

And it’s a tool that people have a great deal of control over in how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs. And so you already see people that are using GPT-4 to do their job much more efficiently by helping them with tasks.

 

Now, GPT-4 will, I think, entirely automate away some jobs, and it will create new ones that we believe will be much better. This happens again and again; my understanding of the history of technology is that it is one long technological revolution, not a bunch of different ones put together.

 

But this has been continually happening: as our quality of life rises, and as the machines and tools that we create help us live better lives, the bar rises for what we do, and what we spend our time going after becomes more ambitious, more satisfying projects.

 

So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by the government to figure out how we want to mitigate that. 

 

But I’m very optimistic about how great the jobs of the future will be.” 

 

Ms. Montgomery and Professor Marcus 



Let’s see how Ms. Montgomery and Professor Marcus reacted to these questions about GPT-4.

 

Ms. Christina Montgomery (IBM):

 

“Well, it’s a hugely important question, and it’s one that we’ve been talking about for a really long time at IBM. We do believe that AI, and we’ve said it for a long time, is going to change every job.

 

New jobs will be created, many more jobs will be transformed, and some jobs will transition away. I’m a personal example of a job that didn’t exist when I joined IBM, and I have a team of AI governance professionals who are in new roles that we created as early as three years ago. 

 

I mean, they’re new and they’re growing. So I think the most important thing that we could be doing, and can and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. 

 

And we’ve been very involved for years now in doing that, in focusing on skills-based hiring, in educating for the skills of the future. Our SkillsBuild platform has 7 million learners and over 1,000 courses worldwide focused on skills.

 

And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.”

 

Professor Marcus:

 

“On the subject of nutrition labels, I think we absolutely need to do that.

 

I think that there are some technical challenges and that building proper nutrition labels goes hand in hand with transparency. The biggest scientific challenge in understanding these models is how they generalize, what they memorize and what new things they do. 

 

The more that there’s in the data set, for example, the thing that you want to test accuracy on, the less you can get a proper read on that. So it’s important, first of all, that scientists be part of that process, and second, that we have much greater transparency about what actually goes into these systems. 

 

If we don’t know what’s in them, then we don’t know exactly how well they’re doing when we give something new. And we don’t know how good a benchmark that will be for something that’s entirely novel. So I could go into that more, but I want to flag that. 

 

Second, on jobs: past performance is not a guarantee of the future. It has always been the case in the past that we have had more jobs, that new jobs and new professions come in as new technologies come in.

 

I think this one’s going to be different. And the real question is over what timescale is it going to be ten years? Is it going to be 100 years? And I don’t think anybody knows the answer to that question.”




Richard Blumenthal (US Senate Democrat) also pointed out: “The words were not mine, and the audio was an AI voice-cloning software trained on my floor speeches.”

 

Mr. Kennedy remarked that “we are afraid AI could be used to destroy faith in government and other institutions.”

 

Sam Altman (OpenAI CEO):

 

In answer, OpenAI CEO Sam Altman said: “Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations.”

 

Some others spoke in favor of AI technology, saying:

 

We need to ensure that our politicians understand this technology, and currently most of them do not. There must be a recognition that AI’s impact could be much more than shattering glass for fun.


