Artificial intelligence competed with doctors! The result was not surprising.

A study conducted by Cambridge University revealed the extent to which artificial intelligence can be effective in the healthcare sector.

In recent years, the rapid development of artificial intelligence technology has brought significant changes to our daily lives. Even the smartphone applications we carry in our pockets are now equipped with artificial intelligence features. The healthcare sector, too, aims to benefit from the opportunities artificial intelligence offers, and an exciting development has recently occurred in this regard: a study was conducted using popular language models.

Artificial intelligence took an exam related to healthcare!

In a study conducted by the prestigious Cambridge University School of Medicine in the United Kingdom, an exam was prepared on ophthalmology, the field that focuses on diseases of the eye and the visual system.

OpenAI’s GPT-4 and GPT-3.5, Google’s PaLM 2, and Meta’s LLaMA language models took this exam alongside five expert ophthalmologists, three ophthalmology trainees, and two junior non-specialist doctors. The exam consisted of 87 multiple-choice questions drawn from textbooks used in ophthalmology training. Both the doctors and the artificial intelligence models answered these questions, and the result was as expected.

GPT-4 led the artificial intelligence models, correctly answering 60 of the 87 questions; the other language models, GPT-3.5, PaLM 2, and LLaMA, answered 42, 49, and 28 questions correctly, respectively. The expert ophthalmologists, for their part, averaged 66.4 correct answers.

While GPT-4’s score clearly approached that of the expert doctors, this doesn’t necessarily mean that artificial intelligence is fully up to the task. The researchers have also cautioned about this.