In the midst of the 2024 US election campaign, a deepfake video falsely alleging election fraud proliferated across social media platforms. In healthcare, biased data has skewed AI results, putting patient care at risk. Opaque algorithms have destabilized markets, obscured decision-making and eroded confidence in financial systems. The risks associated with AI are becoming increasingly evident, and its shortcomings are driving a loss of public trust.
Charles Adkins, CEO of the HBAR Foundation and former President of Hedera Hashgraph, LLC, advocates for a governance system that ensures AI serves humanity rather than causing harm. The scope and complexity of AI development, however, exceed what human oversight alone can manage. This is where distributed ledger technology (DLT) comes into play: a decentralized system that records and verifies data across multiple nodes, bringing transparency, accountability and integrity to AI. In doing so, it promotes trust, prevents monopolistic control and encourages ethical innovation.
One of the main problems with AI is its tendency to operate as a black box, with undisclosed data and logic obscuring how decisions are made. This opacity is particularly damaging in sectors such as healthcare and finance, where accountability is paramount. DLT changes this by recording all data and model updates on an immutable ledger, so that every change is traceable.
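As a rough illustration of that traceability, the sketch below models an append-only, hash-chained log of dataset registrations and model updates in Python. It is a simplified stand-in for a real distributed ledger (there is no network or consensus layer), and the event names and payloads are hypothetical.

```python
import hashlib
import json
import time

class ProvenanceLedger:
    """Toy append-only ledger: each entry hashes its content plus the previous
    entry's hash, so any later tampering breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, event: str, payload: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "payload": payload,
                "timestamp": time.time(), "prev_hash": prev_hash}
        entry_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash and check the chain links; False means tampering."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Example: record a dataset version and a model update, then audit the chain.
ledger = ProvenanceLedger()
ledger.record("dataset_registered", {"name": "patient_records_v3", "sha256": "ab12..."})
ledger.record("model_updated", {"model": "triage-net", "version": "1.4"})
print(ledger.verify())  # True unless an entry was altered after the fact
```

In a real deployment the same hashes would be written to a distributed ledger so that no single party could rewrite the history.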
ProveAI is an example of a platform that uses DLT to secure and track AI training data and updates, helping ensure compliance with ethical standards and regulations such as the EU AI Act. This approach makes AI models auditable and lays the foundation for trust and fairness in their results.
Poor data quality is another persistent problem in AI development. A 2024 survey by Precisely found that 64% of companies consider AI unreliable because of unverified or biased data. By anchoring data to decentralized networks as it is produced, DLT helps keep that data accurate, transparent and tamper-evident.
Platforms like Fetch.ai and Ocean Protocol are already demonstrating the potential of this approach. Fetch.ai uses oracles to pull external data in real time, optimizing logistics and energy efficiency in the Web3 ecosystem. Ocean Protocol facilitates the secure sharing of tokenized data, allowing AI systems to access high-quality datasets while protecting user privacy.
These platforms also play a decisive role in the fight against disinformation, particularly deepfakes. Ofcom recently reported that 43% of people aged 16 and over encountered at least one deepfake online in the first half of 2024. Platforms like Truepic combat this problem by combining blockchain with image authentication, timestamping and media verification at the moment of capture.
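The general idea behind such content provenance can be sketched as follows: fingerprint a file at capture time, record the fingerprint with its capture metadata, and later check whether a circulating copy still matches. This is not Truepic's actual API; the registry, function names and device identifier below are hypothetical, and a real system would anchor the record on a ledger rather than in memory.

```python
import hashlib
import datetime

# Hypothetical in-memory registry standing in for an on-chain provenance record.
provenance_registry: dict[str, dict] = {}

def register_media(content: bytes, device_id: str) -> str:
    """Fingerprint a file at capture time and record who created it and when."""
    digest = hashlib.sha256(content).hexdigest()
    provenance_registry[digest] = {
        "device_id": device_id,
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return digest

def verify_media(content: bytes):
    """Return the original capture record if the file is unmodified, else None."""
    return provenance_registry.get(hashlib.sha256(content).hexdigest())

original = b"...raw image bytes..."
register_media(original, device_id="camera-042")
print(verify_media(original) is not None)        # True: matches the registered fingerprint
print(verify_media(original + b"edit") is None)  # True: any alteration breaks the match
```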
Governance is another weak point. Centralized governance models often struggle to manage the rapid pace, complexity and ethical challenges of AI development; Precisely's global survey found that 62% of organizations view inadequate governance as a major barrier to AI adoption. Decentralized Autonomous Organizations (DAOs), powered by DLT, can provide a solution by automating governance and decision-making through smart contracts.
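To make the smart-contract idea concrete, here is a minimal Python sketch of token-weighted proposal voting of the kind a DAO contract might enforce. Real DAO contracts are typically written in on-chain languages such as Solidity; the proposal text, quorum and member names here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceProposal:
    """Toy DAO proposal: token-weighted votes, resolved automatically once a
    quorum is met (the rule a smart contract would enforce on-chain)."""
    description: str
    quorum: int                      # minimum total voting weight required
    votes: dict = field(default_factory=dict)

    def vote(self, member: str, weight: int, approve: bool) -> None:
        self.votes[member] = (weight, approve)

    def outcome(self) -> str:
        total = sum(w for w, _ in self.votes.values())
        if total < self.quorum:
            return "pending: quorum not reached"
        yes = sum(w for w, ok in self.votes.values() if ok)
        return "approved" if yes * 2 > total else "rejected"

proposal = GovernanceProposal("Require audited training data for model v2", quorum=100)
proposal.vote("node_a", weight=60, approve=True)
proposal.vote("node_b", weight=70, approve=False)
print(proposal.outcome())  # "rejected": the weighted majority voted against
```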
As AI increasingly relies on cross-border data, secure and transparent systems like DLT will be essential to building trust. Governments, businesses and civil society must collaborate to develop governance frameworks that prioritize the public interest, and DAOs must evolve to provide flexible, collective oversight as AI technology advances.
The future of ethical AI depends on decisive action today. DLT can lay the foundation for this future: transparent, accountable and aligned with the best interests of humanity.