INTRODUCTION
To prove a crime, one must identify the possible lines of inquiry and the evidence relevant to them. In the context of cybercrimes or cyber-related offences, the most difficult part is handling digital evidence that is generated by Artificial Intelligence. Such AI-generated evidence must therefore be collected, stored and processed in a forensically sound manner which preserves its integrity and evidentiary value.
With the exponential rise in cybercrimes across India, the question of the applicability of AI-generated evidence in cybercrime prosecutions becomes more urgent. Email messages, records of phone usage, payments made online and user activity on social media platforms such as Facebook and Twitter together constitute a rich evidentiary ecosystem.
The Bharatiya Sakshya Adhiniyam 2023 updates the law of evidence by expanding the definition and admissibility of digital records, while the Bharatiya Nagarik Suraksha Sanhita 2023 makes procedural updates covering electronic reporting, digital statements, jurisdictional issues and mandatory forensic investigations. Through this article we will examine the role of AI-generated evidence in the investigation of cybercrimes and the implications it holds for the future. The article also addresses the admissibility of AI-generated evidence and whether reforms are needed in the Bharatiya Sakshya Adhiniyam.
Meaning Of AI-Generated Evidence
AI-generated evidence can be described as machine-generated output or opinion used to establish digital facts, such as identifying suspects, tracing digital footprints or proving the occurrence of a crime. The evidence or conclusions are processed, produced or enhanced using artificial intelligence algorithms rather than solely by human effort.
Current Legal Position in the Evidence Act
The Indian Evidence Act was passed in 1872, before the advent of computers and electronic communications, and was replaced by the Bharatiya Sakshya Adhiniyam in 2023. Sections 65A and 65B were inserted into the Act by the Information Technology Act 2000 in order to acknowledge electronic records, paving the way for the introduction of electronic evidence in investigations. AI-generated evidence falls within the bracket of electronic evidence, making it admissible in court, provided it is authenticated under Section 65B of the Indian Evidence Act read with Section 63 of the Bharatiya Sakshya Adhiniyam.
The admissibility of AI-generated evidence is governed by the fulfilment of the conditions laid down in Section 65A of the Indian Evidence Act read with Section 62 of the Bharatiya Sakshya Adhiniyam 2023. Section 65A states that electronic records must be proved in accordance with the conditions laid down in Section 65B of the Indian Evidence Act read with Section 63 of the Bharatiya Sakshya Adhiniyam.
In the landmark ruling of Anvar P.V. v. P.K. Basheer the Supreme Court held that electronic records would be admissible only when accompanied by a certificate under Section 65B of the Indian Evidence Act. A subsequent ruling in Shafhi Mohammad v. State of Himachal Pradesh departed from this position and held that the certificate under Section 65B is not always required. The Supreme Court finally settled the question in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, reaffirming Anvar P.V. and holding that the certificate under Section 65B(4) is a mandatory condition precedent to the admissibility of electronic records.
Complications And Loopholes
The previous section highlighted the inconsistencies and uncertainties in the rulings issued by the Supreme Court. The dilemma over whether the certificate is required makes the admissibility of evidence more complicated. On a plain reading of the Bharatiya Sakshya Adhiniyam and the Indian Evidence Act, evidence is admissible only if it is accompanied by the certificate.
Important evidence such as CCTV footage or call records is excluded from judicial scrutiny when the certificate is absent. In the context of AI, the problem becomes further complicated. A critical question arises: how can the algorithmic process through which the output is generated be certified under Section 65B? It is unclear from the Act whether the certification is confined to the evidence as stored, or whether the AI system used to generate the output must also be covered by the certificate.
Reliability and The Black Box Problem
The core basis of evidence presented in court is reliability: evidence must be reliable to be admissible and to serve as a valuable asset in investigations. Section 45 of the Indian Evidence Act read with Section 39(2) of the Bharatiya Sakshya Adhiniyam acknowledges the admissibility of expert opinion, subject to assessment and examination. With AI, this problem appears in a starker form, as there is no reasonable opportunity to investigate the reasoning process. The methodologies adopted by machine learning systems cannot be evaluated because the outputs are derived from proprietary algorithms (the "black box" problem).
Chain Of Custody
The principle of chain of custody guarantees that the evidence presented in court has not been damaged or altered; the doctrine exists to maintain the integrity of the evidence. For traditional evidence, it must always be shown that the data comes from an identifiable source and has not been changed while in custody.
The same does not hold for AI-generated or AI-processed evidence. There, not only must the reliability of the source be authenticated, but the algorithmic process used must be authenticated as well.
The Supreme Court has on various occasions questioned the validity and evidentiary value of such evidence. In the landmark case of Selvi v. State of Karnataka, polygraph and narco-analysis findings were held inadmissible owing to issues with their scientific validity. In the Delhi Riots cases, judges pointed out the lack of peer-reviewed accuracy studies in the public domain while examining the use of facial recognition technology. AI-generated outputs, if admitted as evidence, also expose citizens to privacy concerns.
Proposed Amendments
As established earlier in this article, Section 65B of the Indian Evidence Act is primarily concerned with the admissibility of electronic records. On AI evidence Section 65B is silent, since electronic records in this context means emails, DVDs, CDs, computer printouts, WhatsApp chats and the like. The advent of AI-generated evidence therefore exposes many loopholes and inconsistencies in the Act itself.
In contrast to traditional electronic documents, the outputs of artificial intelligence systems are usually probabilistic rather than deterministic. They are shaped by intricate and often opaque datasets and, in some situations, by hidden biases. Courts continue to evaluate such evidence using legal frameworks intended for conventional, human-authored electronic records, which presents significant challenges owing to this basic divergence.
A new and separate provision should be added to the Act to govern the admissibility of AI-generated evidence. Although Section 65B may serve as a template, the proposed Section 65C must encompass the dynamic, distinct and autonomous nature of AI systems.
A party wishing to rely on evidence derived from AI should disclose in the certificate all details relating to the AI system: its algorithms and flow structure, the underlying architecture of the system, the type and source of its training data, known constraints and error margins, recorded bias assessments, and the results of validation or audit studies carried out before or during the process.
Such disclosures give the opposite party and the court an opportunity to examine the evidence and thoroughly check its validity and applicability.
This provision could also contain a clause requiring testimony from a qualified human expert who can verify the AI-generated evidence used. The testimony could further explain on what grounds the AI arrived at a particular conclusion, upholding the principles of natural justice.
Conclusion
There is no specific provision which addresses the admissibility of artificial intelligence in the context of cybercrimes. Cybercrimes have increased in tandem with the exponential growth of digital technology. With the evolution of technology and the growth of digital offences, AI-generated evidence can be a big boost: it can help identify data patterns and uncover complex cyber networks, streamlining the process of cybercrime prosecution.
AI-generated evidence should not be followed blindly and should be interpreted in line with the provisions established under the Bharatiya Sakshya Adhiniyam.
The evidence should be reliable and accompanied by a description of the methodologies used.
Looking forward, AI-generated evidence is likely to become central to investigations in the years to come. The biggest challenge will not be accepting AI-generated evidence but establishing clear and ethical legal principles for it, striking a balance between technological innovation, constitutional values and elementary safeguards.
- - By Aaryansh Agarwal