Sunday, April 21, 2024

Distrust in AI: Questioning Its Intentions


In the late eighties, when I began my career in computing, I developed commercial applications in Fortran, COBOL, and spreadsheet software such as Lotus. At the time, there was a prevailing belief that anything produced by a computer was infallible, and computer-generated figures were held in high regard. This blind trust in automation created opportunities for manipulation, particularly in the corporate realm, where cunning finance and marketing professionals would tweak numbers and results to align with their projections. They operated on the assumption that this inherent trust in automation would shield them from scrutiny by banks, financial institutions, and government agencies, which seldom re-tabulated the data manually to uncover discrepancies.

Before the advent of AI, such manipulative tactics were confined to a few unscrupulous actors. Overall, there was broad faith in technology's capacity to improve life, notwithstanding occasional concerns about job displacement. With the emergence of AI, however, that dynamic has shifted dramatically. As I noted in my previous post, contemporary scepticism toward technology is unprecedented. For the first time in the history of machine technology and automation, people doubt the outcomes these systems produce and question their underlying intentions. There is pervasive uncertainty about whether AI-driven outcomes advance progress or erode established cultural values and ethical norms. Many have already voiced concerns about AI's propensity, in its current form, for plagiarism, misinformation, and the erosion of human dignity.

This unsettling development is a tragic deviation from the role science and technology have played in advancing humanity, both materially and qualitatively. Left unchecked, this growing distrust threatens to undermine the standing of entrepreneurship and business in human society. As with the regulation of nuclear energy, there is a pressing need for robust international enforcement mechanisms to ensure that AI adheres to fundamental human values, ethics, and legal obligations. Such measures would reassure people that AI technologies will not jeopardize human capital or cultural integrity.

Without a comprehensive regulatory framework, the unchecked proliferation of AI risks undoing the progress achieved by industry and business. Restoring trust in technology is therefore imperative; its erosion is a grave threat to the fabric of society and the well-being of individuals worldwide.
