Artificial intelligence (AI) is rapidly becoming a key driver of organizational innovation, yet its swift adoption has outpaced the development of broadly applicable ethical frameworks, raising significant questions about responsibility, fairness, and sustainability. This paper examines the ethical governance of AI-driven innovation through a systematic literature review of 36 peer-reviewed articles published between 2016 and 2025. The review identifies five principal ethical challenges across sectors and regions: algorithmic bias, transparency, data protection, responsibility, and sustainability. It also highlights notable differences in ethical priorities between Western and non-Western perspectives, particularly in sectors such as healthcare, education, and human resources. Moreover, the analysis reveals a modest but growing integration of ethical AI principles with Environmental, Social, and Governance (ESG) models, indicating both conceptual alignment and operational gaps: while ESG offers a promising framework for embedding ethical standards in innovation ecosystems, its practical implementation remains inconsistent. The paper concludes with a multi-dimensional mapping of the ethical landscape of AI innovation and strategic recommendations for aligning technological advancement with sustainable, human-centered governance.



