The opportunities - and dangers - of artificial intelligence are currently firmly under the microscope, with the fast-developing technology set to transform huge aspects of society.
The rapid advances have sparked fears over jobs, privacy and the spread of false information.
While research on AI has been going on for years, the sudden popularity of generative applications such as ChatGPT and Midjourney has highlighted a technology that could upend the way businesses and society operate.
Luca Allievi, 33, has now used the technology to create realistic-looking - though clearly fake - images of the late Queen Elizabeth in unusual scenarios, such as dancing and DJing.
Allievi created the shots with the help of his wife Anna, 33, using the Midjourney software.
Luca, a biotechnologist from Milan, said: “The images are really realistic. You need to use the right words in the prompt.
“The order of the words is important. You have to describe the person and adjust the prompts as the software evolves. It’s evolving so fast.”
The AI pictures of the Queen highlight how difficult it is for users to determine what is real and what is fake - though, in this case, it’s obvious they are not genuine.
What is AI and why are people worried about it?
The release of several impressive new generative AI tools in the last year has captured the public imagination.
Midjourney, for example, allows users to write prompts and see a computer generate, in just seconds, a realistic-looking image of practically whatever they ask for.
ChatGPT and similar programmes allow users to generate poetry, convincing-sounding university essays, song lyrics and much more.
But there are concerns over copyright and privacy, as the technology is trained on publicly available data.
There are also concerns about AI's impact on jobs, especially the white-collar roles which have been less impacted by automation in the past.
The Competition and Markets Authority (CMA) has opened a probe into AI amid concerns that the technology might harm consumers.
Are there regulations on AI?
AI's rapid rise is also complicating countries' efforts to agree laws governing the use of the technology.
Governments around the world are trying to find a balance whereby they can assess and rein in some of the potential negative consequences of AI without stifling innovation.
In March, the UK opted to split regulatory responsibility for AI between those bodies that oversee human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
The CMA said it would seek to understand how foundation models that use large amounts of unlabelled data were developing.
The review in Britain echoes investigations taking place around the world, from China to the EU and the United States.
The CEO of OpenAI, the startup behind ChatGPT, told a US Senate panel on Tuesday that using artificial intelligence to interfere with election integrity is a “significant area of concern”, adding that it needs regulation.