
Artificial Intelligence


maqroll


2 hours ago, mjmooney said:

Why is it that AI-generated 'photographs' of people are so incredibly realistic, except for the hands? They always seem to have six or seven fingers - sometimes that's the only thing that gives it away as AI. I have no idea how the coding works, but I'd have thought it would be pretty easy to program it to know that hands have four fingers and a thumb.

Easy, the developers are from Norwich.


2 hours ago, mjmooney said:

Why is it that AI-generated 'photographs' of people are so incredibly realistic, except for the hands? They always seem to have six or seven fingers - sometimes that's the only thing that gives it away as AI. I have no idea how the coding works, but I'd have thought it would be pretty easy to program it to know that hands have four fingers and a thumb.

It's one of the things that reveal that this technology isn't really 'AI'.

The programmes use analysis of billions of images to create things based on the prompt you give them. They don't 'understand' what the images are, just that they are what they are. If you ask one to create an image of a bird, you'll get something with a beak, feathers and two wings, because the images of birds in its data almost always have a beak, feathers and two wings, but it doesn't understand anything about what those things do or why they are as they are. So you could get something that looks like an ostrich with hummingbird wings and an eagle's head in macaw colours. It's got all the hallmarks of a bird, but it's not something that would actually exist.

With that in mind, you can see why it can't do hands. The programmes don't understand what a hand is; they can just see similarities between images of hands, but that's all. That causes a problem, because images of people rarely focus on hands, and hands can look very different from image to image because they move and make different shapes - a fist looks very different to a hand waving, which is very different to a hand holding a glass, which is very different to a hand holding a pencil, which is different to pointing, which is different to a thumbs up... As such, the image of what a hand is is much more changeable to the programme than a face. It just 'knows' that hands are things at the end of arms with a number of bumps at the end and a slightly different coloured bit on the tips.

It's also the same reason it can't do teeth. And it's why the text-generating ones can produce very convincing-sounding text but will have nonsense or meaningless stuff in it - they don't understand anything beyond the surface, because all they're doing is collating the information they have to produce something statistically similar to the things they associate with whatever they're being asked to generate.
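The 'statistically similar' point above can be sketched with a toy example. The following is a minimal bigram model - an illustration only, real systems are vastly more sophisticated - that records which word tends to follow which in a tiny corpus, then generates text by sampling, with no notion of meaning at all:

```python
import random
from collections import defaultdict

# Tiny hypothetical "training data" - just word sequences, no meaning.
corpus = (
    "a bird has a beak and feathers and two wings "
    "a bird has feathers and a beak "
    "a hand has four fingers and a thumb"
).split()

# Record, for each word, every word that has followed it.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a word that has
    followed the current word somewhere in the corpus."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # dead end: this word never had a successor
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("a"))
```

Every adjacent word pair in the output has been seen before, so it is locally plausible, but the whole can still be globally nonsensical - a bird with a thumb, say. That's the same failure mode as the six-fingered hands.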


2 hours ago, Chindie said:

It's one of the things that reveal that this technology isn't really 'AI'.

The programmes use analysis of billions of images to create things based on the prompt you give them. They don't 'understand' what the images are, just that they are what they are. If you ask one to create an image of a bird, you'll get something with a beak, feathers and two wings, because the images of birds in its data almost always have a beak, feathers and two wings, but it doesn't understand anything about what those things do or why they are as they are. So you could get something that looks like an ostrich with hummingbird wings and an eagle's head in macaw colours. It's got all the hallmarks of a bird, but it's not something that would actually exist.

With that in mind, you can see why it can't do hands. The programmes don't understand what a hand is; they can just see similarities between images of hands, but that's all. That causes a problem, because images of people rarely focus on hands, and hands can look very different from image to image because they move and make different shapes - a fist looks very different to a hand waving, which is very different to a hand holding a glass, which is very different to a hand holding a pencil, which is different to pointing, which is different to a thumbs up... As such, the image of what a hand is is much more changeable to the programme than a face. It just 'knows' that hands are things at the end of arms with a number of bumps at the end and a slightly different coloured bit on the tips.

It's also the same reason it can't do teeth. And it's why the text-generating ones can produce very convincing-sounding text but will have nonsense or meaningless stuff in it - they don't understand anything beyond the surface, because all they're doing is collating the information they have to produce something statistically similar to the things they associate with whatever they're being asked to generate.

Yes, that all makes sense. I expect they're already working on it, but it seems to me that the next logical step is to look at where it's going wrong and 'teach' it: show it lots and lots of very clearly defined hands and teeth, to feed into its algorithms.
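That 'feed it clearly defined examples' idea is roughly what fine-tuning on a curated dataset does. As a toy illustration (the numbers are hypothetical, and this is nothing like a real training pipeline), here's a 'model' that simply learns the most common finger count it has seen - flooding it with clearly labelled examples sharpens its answer:

```python
from collections import Counter

# Toy stand-in for "show it lots of clearly defined hands": the model
# just remembers the most common finger count in its training examples.
noisy_web_data = [5, 6, 7, 5, 4, 6, 5, 6]  # hypothetical messy scraped images
curated_data = [5] * 100                   # clearly defined hands, all correct

def most_likely_fingers(examples):
    """Return the most frequently seen finger count."""
    return Counter(examples).most_common(1)[0][0]

print(most_likely_fingers(noisy_web_data))                 # noisy data: 5 and 6 nearly tied
print(most_likely_fingers(noisy_web_data + curated_data))  # curated data dominates: 5
```

The curated examples don't give the model any understanding of hands; they just shift the statistics so the right answer dominates, which is exactly the spirit of the suggestion above.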


  • 2 weeks later...
Quote

Elon Musk and others urge AI pause, citing 'risks to society'

 Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarising lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/


https://en.wikipedia.org/wiki/Future_of_Life_Institute

Quote

The Future of Life Institute (FLI) is a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its advisors include entrepreneur Elon Musk.

I'm not sure if this is a heroic or a creepy organisation.

https://futureoflife.org

Quote

 

The risks we focus on

We are currently concerned by four major risks. All four hinge on the development, use and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.

Artificial Intelligence

From recommender algorithms to self-driving cars, AI is changing our lives. As the impact of this technology magnifies, so will its risks.

Biotechnology

From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.

Nuclear Weapons

Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever - and new research reveals that the impacts are even worse than previously reckoned.

Climate Change

Likely the most well-known of our cause areas, climate change increases the likelihood of other catastrophic risks, such as pandemics or war, as well as posing many catastrophic threats on its own.



I consider myself a reasonably well-read and well-educated teacher who has worked with computers since CP/M and DOS 3.0 were the newest OSes available. I've played around a bit with ChatGPT 3 & 4, and I'm not too impressed with what those virtual toddlers manage to come up with in fields where I have personal professional knowledge.

But moving on beyond these AI infants (as is bound to happen), I believe there absolutely has to be an adult present in the room. Something akin to Isaac Asimov's Three Laws of Robotics. If not, Fredric Brown's 1954 short story "Answer" will come to fruition sooner or later.


  • 2 weeks later...

Just went through the same exercise:

The largest capital city by area is Moscow, the capital city of Russia. The total area of Moscow is 2,511 square kilometers (970 square miles), making it one of the largest cities in the world by land area.

Ankara is the capital city of Turkey and its area is 25,706 square kilometers (9,938 square miles). However, the actual urban area of Ankara is smaller, with a land area of around 2,516 square kilometers (971 square miles) and a population of approximately 5.6 million people.
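One quick way to probe answers like the two above is to check their internal consistency: do the km² and square-mile figures in the same sentence even agree? A small sketch (the conversion factor is standard; the figures are copied straight from the chatbot's answers):

```python
# Sanity-check the quoted figures: convert each km² value to square
# miles and compare against the square-mile figure claimed alongside it.
KM2_TO_MI2 = 0.386102  # 1 square kilometre in square miles

claims = {  # (km² stated, square miles stated) from the answers above
    "Moscow": (2511, 970),
    "Ankara province": (25706, 9938),
    "Ankara urban": (2516, 971),
}

for place, (km2, mi2_claimed) in claims.items():
    mi2 = round(km2 * KM2_TO_MI2)
    flag = "OK" if abs(mi2 - mi2_claimed) <= 1 else "MISMATCH"
    print(f"{place}: {km2} km² ≈ {mi2} sq mi (claimed {mi2_claimed}) {flag}")
```

Running this flags the Ankara province figure: 25,706 km² is about 9,925 square miles, not 9,938 - so the answer doesn't even agree with itself, quite apart from contradicting the 'Moscow is largest' claim.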


2 hours ago, fruitvilla said:

Just went through the same exercise:

The largest capital city by area is Moscow, the capital city of Russia. The total area of Moscow is 2,511 square kilometers (970 square miles), making it one of the largest cities in the world by land area.

Ankara is the capital city of Turkey and its area is 25,706 square kilometers (9,938 square miles). However, the actual urban area of Ankara is smaller, with a land area of around 2,516 square kilometers (971 square miles) and a population of approximately 5.6 million people.

Interestingly, Google Bard did much better than ChatGPT on this one. 

It's interesting that the three big players (ChatGPT, Bing and Bard) can offer wildly different answers.

Edited by Lichfield Dean

  • 2 weeks later...
Quote

 

If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it.  

Both of these resources are only really available to big companies. And although some of the most exciting applications, such as OpenAI's chatbot ChatGPT and Stability.AI's image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to its vast data and computing resources.

“A couple of big tech firms are poised to consolidate power through AI rather than democratize it,” says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit. 

Right now, Big Tech has a chokehold on AI. But Myers West believes we’re actually at a watershed moment. It’s the start of a new tech hype cycle, and that means lawmakers and regulators have a unique opportunity to ensure that the next decade of AI technology is more democratic and fair. 

What separates this tech boom from previous ones is that we have a better understanding of all the catastrophic ways AI can go awry. And regulators everywhere are paying close attention. 

China just unveiled a draft bill on generative AI calling for more transparency and oversight, while the European Union is negotiating the AI Act, which will require tech companies to be more transparent about how generative AI systems work. It's also planning a bill to make them liable for AI harms.

 

MIT Technology Review


On 19/04/2023 at 09:39, leemond2008 said:

applying for a new job - ChatGPT has just written pretty much my entire CV for me

The more I think about this, the more questions I have, really. 

Since it doesn't know you, presumably you still have to put in your job history, exam results, name, address, personal details, etc. - which seems to me to be most, if not all, of the contents of a CV.

What exactly did ChatGPT do? I'm just puzzled and curious.


5 minutes ago, HKP90 said:

The more I think about this, the more questions I have, really. 

Since it doesn't know you, presumably you still have to put in your job history, exam results, name, address, personal details, etc. - which seems to me to be most, if not all, of the contents of a CV.

What exactly did ChatGPT do? I'm just puzzled and curious.

Just bullshitted like on a normal CV

