Author Information
Abstract
Background: Artificial intelligence (AI) technologies have the potential to transform many aspects of patient care and are playing an increasing role in health care diagnostics and patient management. Several studies have already demonstrated that AI can perform as well as or better than humans at key healthcare tasks. The aim of this study was to evaluate and compare the utility of the AI chatbots ChatGPT and Deepseek in answering everyday chemical pathology queries as handled by the registrars in the Chemical Pathology Department at Inkosi Albert Luthuli Central Hospital in Durban, South Africa. Method: All queries received by the registrars over a one-month period were documented, together with the answers obtained from traditional sources such as laboratory procedures and sample handbooks. The queries were later posed to ChatGPT and Deepseek, and the responses were evaluated by two blinded senior pathologists in terms of suitability, accuracy, and key word recognition. Results: A total of 37 queries were posed to the chatbots. Based on the average scores of the two reviewers, 97% (n = 36/37) of ChatGPT responses and 73% (n = 27/37) of Deepseek responses were rated 3 or above for suitability and accuracy. The poorly scoring responses from both Deepseek and ChatGPT related to questions that were highly specific to the local laboratory or its testing. Conclusion: A longer-term evaluation and verification of ChatGPT as a resource to assist with laboratory-related queries is required. It may be a useful resource not only to the trainee chemical pathology registrar but also to under-resourced health care settings where pathologist support may not be present.
Keywords
References

This work is licensed under a Creative Commons Attribution 4.0 International License.