

Google pauses AI generation of images of people after an error


Google announced on Thursday that it had paused its artificial intelligence tool's ability to generate images of people, after the program depicted Nazi-era soldiers as people of various ethnic backgrounds.

The American technology giant launched a new, improved version of its Gemini program on February 8 in some countries, but has now admitted that it will have to “fix recent issues” with the image-generation feature.


“We have paused the generation of images of people and will soon re-release an improved version,” the company said in a statement.


The AI produced four images of soldiers: one white, one black, and two women of color, according to an X user named John L.


Technology companies see artificial intelligence as the future for everything from search engines to smartphone cameras.

But AI programs, and not just those produced by Google, have been widely criticized for perpetuating racial bias in their results.

“@GoogleAI has an additional diversity mechanism that no one has designed or tested well,” John L. wrote on X.


Big tech companies are often accused of launching AI products before properly testing them.


Google's own track record in launching artificial intelligence products has not been flawless.

The company apologized a year ago after an ad for its newly launched Bard chatbot showed the software incorrectly answering a basic question about astronomy.