Journal of Civil & Legal Sciences
ISSN: 2169-0170
Open Access

  • Perspective Article   
  • J Civ Leg Sci, Vol 11(6)
  • DOI: 10.4172/2169-0170.1000333

Criminal and civil liability of an Artificial Intelligence

Sebabatso Motsamai*
Department of Legal Research, Forensic and Risk Investigations Trainee, Johannesburg Metropolitan Area, South Africa
*Corresponding Author: Sebabatso Motsamai, Department of Legal Research, Forensic and Risk Investigations Trainee, Johannesburg Metropolitan Area, South Africa, Tel: +279765634363, Email: smotsamai@gmail.com

Received: 23-May-2022 / Manuscript No. JCLS-22-65949 / Editor assigned: 25-May-2022 / PreQC No. JCLS-22-65949(PQ) / Reviewed: 08-Jun-2022 / QC No. JCLS-22-65949 / Revised: 13-Jun-2022 / Manuscript No. JCLS-22-65949 (R) / Published Date: 20-Jun-2022 DOI: 10.4172/2169-0170.1000333

Abstract

This research paper explores the criminal and civil liability of an Artificial Intelligence (AI), machine or robot for an act or conduct committed independently of human intervention or control, viewed through the lens of the Cybercrimes Act. The Fourth Industrial Revolution brought many challenges during the hard lockdown in South Africa in 2020, including cybercrimes such as, but not limited to, cyber fraud, phishing and hacking. The government of South Africa responded to the pandemic with a hard lockdown to reduce the spread of the virus. Many companies responded by introducing remote working; many employees were remotely based and there were no monitoring measures in place. As a result, the cybercrime rate skyrocketed: clients’ information was compromised and businesses lost money to cyber fraud. The latest technology has made it easier to commit cybercrimes. The Protection of Personal Information Act was enacted to protect personal information that may be compromised, and it helps to reduce the theft and misuse of people’s personal information.

Keywords

Criminal liability; AI; Human control or intervention; Foreseeability; Delictual liability; Contractual liability; Automated transactions

Introduction

In several industries in the Republic of South Africa, there has been a vast drive towards incorporating artificial intelligence (AI) and machine learning (ML) into business and products to streamline operations, analyse user behaviour and determine or predict potential purchasing behaviour. However, as technology advances at a rapid pace, policymakers and laws have struggled to keep up. The increasing role of AI in the economy and society presents both practical and conceptual challenges for the legal system. Many of the practical challenges stem from the manner in which AI is researched and developed and from the basic problem of controlling the actions of autonomous machines. South Africa has not yet formalised any policy documents or introduced bills in parliament for the regulation of AI. However, in April 2019, the President appointed members to the Presidential Commission on the Fourth Industrial Revolution (“4IR Commission”), which will assist the government in taking advantage of the opportunities presented by the digital industrial revolution [1].

The Fourth Industrial Revolution brought many challenges during the hard lockdown in South Africa in 2020, including cybercrimes such as, but not limited to, cyber fraud, phishing and hacking. The government of South Africa responded to the pandemic with a hard lockdown to reduce the spread of the virus. Many companies responded by introducing remote working; many employees were remotely based and there were no monitoring measures in place. As a result, the cybercrime rate skyrocketed: clients’ information was compromised and businesses lost money to cyber fraud. The latest technology has made it easier to commit cybercrimes [2]. The Protection of Personal Information Act was enacted to protect personal information that may be compromised, and it helps to reduce the theft and misuse of people’s personal information [3].

The fact that robots, especially self-driving cars, have become part of our daily lives raises novel issues in criminal law. Robots can malfunction and cause serious harm, but as things stand today, they are not suitable recipients of criminal punishment, mainly because they cannot conceive of themselves as morally responsible agents and because they cannot understand the concept of retributive punishment.

Given that criminal law commonly requires mens rea (a guilty mind), it would seem that the recipient of a package ordered by a bot, even if she programmed the bot herself, might not be held criminally liable. Humans who produce, program, market and employ robots are subject to criminal liability for intentional crime if they knowingly use a robot to cause harm to others. A person who allows a self-teaching robot to interact with humans can foresee that the robot might get out of control and cause harm [4].

Currently, AI does not enjoy a separate legal status in South Africa. However, this may have to change in the near future as AI software becomes more autonomous and, through machine learning, starts making independent decisions outside the scope of those initially programmed. This change could potentially be facilitated by extending the principle laid out by Corbett CJ in Financial Mail v Sage Holdings, namely that courts tend to view natural and artificial (legal) persons as enjoying the same personality rights in circumstances where it is appropriate to do so [5]. In that particular case, the court extended privacy rights to a company. It follows that if personality rights (analogous to those conferred on companies) can be extended to “artificial” persons, creating a separate form of legal status for AI may be possible in certain specified circumstances in the future. One important characteristic of AI that poses a challenge to the legal system relates to the concept of foreseeability. This points to a fundamental difference between the decision-making processes of humans and those of modern AI, a difference that can lead AI systems to generate solutions that a human would not expect. Humans, bounded by the cognitive limitations of the human brain, are unable to analyse all or even most of the information at their disposal when faced with time constraints [6].
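
To make the contrast concrete, the following minimal sketch (hypothetical jobs, deadlines and penalties, not drawn from the cited sources) shows a machine exhaustively evaluating every ordering of a small set of tasks and finding a schedule that a person applying a quick rule of thumb under time pressure would be unlikely to hit upon.

```python
# Minimal, hypothetical sketch: exhaustive machine search versus a human-style
# heuristic, illustrating how an AI can reach a solution a person would not expect.
from itertools import permutations

# Hypothetical jobs: (name, duration, penalty per unit of time finished past deadline)
jobs = [("audit", 4, 5), ("report", 2, 9), ("backup", 3, 1), ("filing", 1, 7)]
deadline = {"audit": 4, "report": 6, "backup": 9, "filing": 2}

def total_penalty(order):
    """Sum of lateness penalties when jobs are completed in the given order."""
    time_elapsed, penalty = 0, 0
    for name, duration, rate in order:
        time_elapsed += duration
        penalty += max(0, time_elapsed - deadline[name]) * rate
    return penalty

# "Human-style" heuristic under time pressure: do the most heavily penalised jobs first.
heuristic = sorted(jobs, key=lambda job: -job[2])

# Machine approach: evaluate every possible ordering (trivial here, but infeasible
# for a person once the number of jobs grows).
best = min(permutations(jobs), key=total_penalty)

print("heuristic order:", [j[0] for j in heuristic], "penalty:", total_penalty(heuristic))
print("exhaustive order:", [j[0] for j in best], "penalty:", total_penalty(best))
```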

Criminal laws normally require both an actus reus (an act) and a mens rea (a mental intent). Currently, emergence is essential to define the level of foreseeability and people’s liability for AI’s actions, and social valence is essential to define the protections needed against deception in consumer law. When AI technology advances and AI robots become more like people, emergence will be relevant to determine moral agency and social valence will be relevant to determine moral patiency. In both cases, the most applicable characteristic and analogue will be determined by the legal question at hand and by the purposes of the law in the regulated relationship [7].

Criminal liability

Criminal responsibility of the programmer of the AI software for cybercrimes committed by the AI independent of human control or intervention

The programmer programmes the AI and may be responsible for everything that concerns the production of the AI, such as hardware, software and other features. The programmer also knows the technology behind the decision-making process in the AI, at least in the state in which the AI is introduced to the market. Furthermore, the programmer is also the only actor who may shape the other actors’ expectations of what the AI is de facto capable of. However, a programmer of AI software might design a program in order to commit offenses via the AI entity. For example: a programmer designs software for an operating robot. The robot is intentionally placed in a factory, and its software is designed to torch the factory at night when no one is there. The robot committed the arson, but the programmer is deemed the perpetrator. The programmer’s responsibility is primarily linked to the hardware and software of the AI, including everything from the mechanical elements to the code and algorithms within, as well as the education and training of the AI. The programmer may influence the AI in any area, since the code is the AI’s brain, i.e. its core and the key to everything the AI is capable of doing. A malfunction that is the consequence of a programmer’s fault will probably be traced back to the programmer [8].

Criminal responsibility of the user or end-user of the AI software for cybercrimes committed by the AI independent of human control or intervention

The second person who might be considered the perpetrator is the user of the AI. The user did not program the software, but he uses the AI entity, including its software, for his own benefit [9]. For example, a user purchases a servant-robot, which is designed to execute any order given by its master. The robot identifies the specific user as the master, and the master orders the robot to assault any invader of the house. The robot executes the order exactly as ordered. This is no different from a person who orders his dog to attack any trespasser. The robot committed the assault, but the user is deemed the perpetrator. However, it is suggested that the user’s and supervisor’s responsibilities are primarily linked to the use of the AI, i.e. when the AI is performing something. These actors may impact the AI by remotely controlling it, by giving it exact instructions, or by omitting to intervene and override the AI’s decisions [10].

Moreover, a user can, for instance, remotely control a drone and intentionally fly it into an airplane, or give the drone exact instructions for how to fly while up in the air. From a liability perspective, the first of these examples is not that difficult to solve if one considers the drone a simple tool used to damage the airplane [11]. A further example is that of an iPhone user who commits murder and asks Siri for advice on how to hide the body; Siri responds with helpful advice that leads to his temporary success in hiding the crime. Would Siri be an accessory? To answer this question, we would need to allocate responsibility for the outcome (hiding the body). We should then focus on Siri’s level of emergence rather than her social valence: the question is not whether the iPhone user could be deceived, but rather whether anyone was in a position to predict the outcome, and what incentives should be set for people in such a situation going forward; this will determine whether anyone should be seen as an accessory to the crime committed by the user [12].

In both scenarios, the actual offense was committed by the AI entity. The programmer or the user did not perform any action conforming to the definition of a specific offense; therefore, they do not meet the actus reus requirement of the specific offense [13]. The programmer had criminal intent when he ordered the commission of the arson, and the user had criminal intent when he ordered the commission of the assault, even though these offenses were actually committed through a robot, an AI entity. When an end-user makes instrumental usage of an innocent agent to commit a crime, the end-user is deemed the perpetrator. The owner will in almost every case coincide with the user or the supervisor, and before the sale the owner coincides with the producer [14].

If a criminal offence is being considered, what mens rea is required? It seems unlikely that AI programs will contravene laws that require knowledge that a criminal act was being committed, but it is very possible they might contravene laws for which ‘a reasonable man would have known’ that a course of action could lead to an offence, and it is almost certain that they could contravene strict liability offences [15]. Thus, AI programs may also be held liable for strict liability offences, in which case the programmer is likely to be found at fault. In all cases where the programmer is deemed liable, there may be further debate over whether the fault lies with the programmer, the program designer, the expert who provided the knowledge, or the manager who appointed the inadequate expert, program designer or programmer. AI criminal liability may be the solution of the future [16].

Whilst impact and ability will be the main determinants of responsibility in matters of causation of the consequences of the crime, they may also be important for criminal liability. Since the discussion here is confined to liability for crimes an AI commits, and we know that an AI is not legally accountable for its conduct, we need to trace the criminal behaviour back to a human behind the AI. That human must be in a position where he or she has a possibility to influence the AI and its conduct in one way or another. Seemingly, this will be determined through consideration of the specific circumstances of each alleged crime [17].

The general basis for criminal liability is usually the act requirement: only human acts can be a ground for imposing a punishment. An AI’s crime must be capable of being ascribed to a human who can fulfil the elements of criminal liability, actus reus and mens rea. In order to analyse the actus reus element, it is necessary to identify the actors involved in the AI and its decision-making. The first obvious actor is the user. The user is the person who launches the AI in the first place, instructs it about its tasks and benefits from the AI’s work. The tendency thus far is that the user, together with the supervisor, has been targeted in criminal investigations concerning AIs’ behaviour. The next possible actor is the supervisor, who oversees the AI and has the possibility to intervene in the AI’s decision-making if necessary [18].

In criminal law, the principal must normally have the mens rea necessary for the relevant crime. If an AI engineer creates an AI system for making toast and that machine then burns down a house, killing everyone in it, on the reasoning that “all the bread would be toasted”, then the programmer may face criminal consequences for their reckless behaviour in creating such a programme [19]. Gabriel Hallevy describes this as “natural-probable-consequence” liability, holding that it “seems legally suitable for situations in [which] an AI entity committed an offense, while the programmer or user had no knowledge of it, had not intended it and had not participated in it” [20].

Moreover, punishing robots assumes that traditional concepts of intent and knowledge apply to AI-powered machines [21]. At present, there exists no law setting out the legal obligations of robots themselves. As such, the answer to the question of who should be held liable when a robot harms a human being or causes damage cannot be found in existing legislation. Perhaps new technological advancements in this area require the introduction of new and modernised forms of law. Robot manufacturers would therefore need a criminal code for robots, which would help to reduce ambiguities by providing a minimum set of moral standards to which all smart robots must adhere. Modern Intelligent Agents can make decisions based on an evaluation of their options. They can be taught to react to “moral dilemmas”, that is, to choose to forego the pursuit of a goal if the goal can only be achieved by causing significant collateral harm [22].
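
As a rough illustration of this kind of constrained decision-making, the sketch below (hypothetical action names, scores and threshold, not taken from any cited system) shows an agent that ranks candidate actions by how well they advance its goal but forgoes any option whose estimated collateral harm exceeds a configured limit.

```python
# Minimal, hypothetical sketch: an agent that evaluates its options and forgoes
# the goal rather than accept significant collateral harm.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    goal_value: float       # how well the action advances the agent's goal
    estimated_harm: float   # estimated collateral harm to third parties

HARM_THRESHOLD = 0.2  # assumed maximum harm the agent is permitted to accept

def choose_action(candidates: list[Action]) -> Optional[Action]:
    """Return the highest-value action whose estimated harm is tolerable,
    or None if every candidate would cause unacceptable collateral harm."""
    permissible = [a for a in candidates if a.estimated_harm <= HARM_THRESHOLD]
    if not permissible:
        return None  # forgo the goal entirely rather than cause serious harm
    return max(permissible, key=lambda a: a.goal_value)

if __name__ == "__main__":
    options = [
        Action("fast route through crowded area", goal_value=0.9, estimated_harm=0.7),
        Action("slow route around crowded area", goal_value=0.6, estimated_harm=0.05),
    ]
    picked = choose_action(options)
    print(picked.name if picked else "no permissible action; goal abandoned")
```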

One might think that smart robots cannot be held liable for their actions because they are not susceptible to punishment [23]. If a human intentionally or knowingly programs a robot so that it causes harm to a person, the programmer’s criminal responsibility can easily be established on the basis of traditional concepts of attribution and mens rea: the programmer commits the criminal act by using the robot, irrespective of its artificial intelligence, as a tool for carrying out the programmer’s intention, and she does so with the requisite intent or knowledge. The standards of due attention and due care are geared toward human beings. They cannot simply be transferred to robots, because robots cannot “foresee” consequences they have not been programmed to foresee. Tolerance for robot malfunctions must, however, be subject to strict limitations [24]. The challenge remains to strike a fair balance between society’s interest in promoting innovation and the dangers associated with the use of Intelligent Agents with destructive potential. One factor to be considered in the balancing process is the social benefit of the robot at issue in relation to its potential for harm [25].

One important characteristic of AI that poses a challenge to the legal system relates to the concept of foreseeability

Anyone carrying out or involved in activities that pose a “serious risk of harm, cannot be made safe, and [are] not common to the community” is strictly liable for injuries to other people.

This rule of law originated from the old English case of Rylands v Fletcher, where the defendant was held liable for injuries that resulted from a water reservoir that flooded his neighbour’s mineshaft [26]. One requirement the English courts took especially seriously was that the dangerous “thing” must escape the owner’s property and cause mischief somewhere else. In general, the American and English cases are very similar, declaring the storage of large quantities of water in tanks, the possession of explosives and flammable liquids, or the operation of drilling devices to be abnormally dangerous. Furthermore, the courts require some “special circumstances in the locality” and a “non-natural” or “exceptional” use of the land [27].

First, providing redress for persons injured through no fault of their own is an important value in its own right. The idea that individuals should bear a loss that is visited upon them, even when the causal failure is inexplicable, runs counter to basic notions of fairness, compensatory justice, and the apportionment of risk in society [28]. Second, a strict liability regime is warranted because, in contrast to the injured party, the vehicle’s creators are in a position either to absorb the costs or, through pricing decisions, to spread the burden of loss widely. After all, it is not unreasonable that the costs of inexplicable accidents be borne, at least in part, by those who benefit from risk-reducing, innovative products. Third, a strict liability regime will spare all concerned the enormous transaction costs that would be expended if parties had to litigate liability issues involving driverless cars where fault cannot be established [29].

Discussion

However, autonomous systems could simply be dangerous in ways different from those envisioned in the initial rule of law. The areas of application, the type of robot, and the specific characteristics of robots could be important factors. Cerka et al. define a source of danger as a “specific object of the physical world that has specific properties” and portray AI as a fitting example [30]. Its dangerousness stems from its ability to gather information from the environment and respond autonomously. Consequently, they hold the AI developer liable for damages resulting from software agents. However, focusing on autonomous robotic machines in the physical world, liability could also be attributed to the owners or users of robots, who deploy these systems for their benefit [31].

The Cybercrimes Act provides us with the position on the criminal and civil liability of an Artificial Intelligence (AI). It is vital that we have effective and legitimate mechanisms that will prevent and forestall human rights violations, given the speed and scale at which many advanced digital systems operate in ways that pose substantial threats to human rights without necessarily generating substantial risks of tangible harm [32]. A preventative approach is especially important given that such threats could seriously erode the social foundations necessary for moral and democratic orders, which are essential preconditions for the exercise of individual freedom, autonomy and human rights. This may include both a need to develop collective complaints mechanisms to facilitate effective rights protection, and a need to enhance and reinvigorate our existing conceptions and understandings of human rights [33].

Civil liability

Whilst all of the legal issues highlighted are of critical importance, the most obvious question that will no doubt be at the forefront of a consumer’s mind is the liability regime pertaining to AI in the event of a malfunction and/or damage caused. In the absence of a separate legal personality regime for AI, these issues are generally product-centric and are governed by the specific consumer protection and product legislation set out below. Notably, this legislation does not detract from the remedies available under the law of contract (such as breach of warranty) and the law of delict (such as patrimonial and non-patrimonial loss) [34].

Delictual liability

In South Africa, civil liability can be divided into delictual and contractual liability. Currently, AI is not recognised as having its own civil liability. A delict occurs when one party commits a wrong against another. The basic elements of delict are conduct, wrongfulness, fault, causation and damage [35]. Furthermore, for a plaintiff to establish a civil liability claim, the plaintiff must establish that the defendant acted negligently or with intention. However, an exception to this is strict liability, for example vicarious liability in employment relationships.

Damage caused by the use of an AI robot will be compensable in terms of the South African law of delict if such use constitutes a wrongful and culpable act that causes harm. Damages for patrimonial loss suffered can be claimed under the Aquilian action, and the action for pain and suffering can be instituted for non-patrimonial loss suffered [36].

Contractual liability

The licensing contract under which software is usually supplied is unknown to the South African common law and has no naturalia which determine its scope. Whether or not a licensing agreement is concluded, the provisions of the Copyright Act apply in any case to the copyright in software, and a legitimate user is entitled to make back-up copies for recovery purposes. As such, the licensing contract is an innominate contract to which the general principles of contract law apply. Where damage is incurred through the use of AI, contractual liability may arise between parties to a contract. The contractual liability will depend on the type of contract(s) in existence which, in the case of software use, consists of at least two types of contracts, namely the licensing contract between the developer and the user, and the acquisition contract between the supplier and the user. Before a contractual action can be instituted, the requirements for a valid contract must be met in terms of the general principles of contract.

When it comes to the unforeseeable but harmful acts of autonomous robotic machines, our laws must find a balance between a robot’s “parent”, who might not be guilty, and the “equally blameless victim” [37]. In order to regulate the specific risks of these intelligent robots, a strict liability regime seems appropriate, since the role of human control and, thereby, the probability of fault will decrease. It seems that fault-based liability regimes are not fully capable of determining liability when it comes to the specific risks of autonomous robots. A strict liability regime would, on the other hand, ensure legal certainty and provide compensation for victims. However, the construction of a new doctrine poses challenges for courts and legal scholars across the world [38].

Automated transactions - ECTA

Section 20(c) of ECTA creates a rebuttable presumption that the parties to an automated transaction are bound by its terms irrespective of whether they have reviewed the contents of the contract. Section 25(c) of ECTA goes on to place the liability for the consequences of an automated transaction squarely on the shoulders of the programmer of the electronic agent, alternatively the person for whom the electronic agent was programmed. This remedy is subject to the caveat that the programmer may escape liability if it can be shown that the electronic agent deviated from its programming when concluding the contract [39].
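
The caveat in section 25(c) turns on whether the electronic agent deviated from its programming. A minimal, hypothetical sketch of such an “electronic agent” is given below (the class names, limits and log format are illustrative assumptions, not anything prescribed by ECTA): the agent concludes contracts only within the parameters it was programmed with and records any attempted deviation, the kind of record a programmer might point to when invoking the caveat.

```python
# Hypothetical sketch (not part of ECTA): an electronic agent that concludes
# purchase contracts only within its programmed parameters and logs deviations.
from dataclasses import dataclass, field

@dataclass
class ProgrammedLimits:
    max_price: float
    allowed_items: set[str]

@dataclass
class ElectronicAgent:
    limits: ProgrammedLimits
    audit_log: list[str] = field(default_factory=list)

    def conclude_contract(self, item: str, price: float) -> bool:
        """Accept the offer only if it falls within the programmed limits."""
        within_programming = item in self.limits.allowed_items and price <= self.limits.max_price
        if within_programming:
            self.audit_log.append(f"ACCEPTED {item} at {price}")
        else:
            self.audit_log.append(f"REJECTED {item} at {price}: outside programmed parameters")
        return within_programming

if __name__ == "__main__":
    agent = ElectronicAgent(ProgrammedLimits(max_price=1000.0, allowed_items={"toner", "paper"}))
    agent.conclude_contract("paper", 250.0)     # within programming: contract concluded
    agent.conclude_contract("laptop", 8000.0)   # outside programming: no contract concluded
    print("\n".join(agent.audit_log))
```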

South Africa has to leverage the unlimited opportunities that exist in artificial intelligence (AI). It can, and should, do this by moving from being a nation of passive consumers to providing solutions in conceptualising, designing, testing and benchmarking AI products and technologies. This is so because, in the Fourth Industrial Revolution, no single country can cover all of the technologies that are needed for different sets of scenarios. August argues that once it is established that robot-humans can exist, logic, ethics and open-minded morality dictate that they be given equal rights with humans, because to discriminate against them on the basis of the "softness" or "hardness" of their body parts is just as unreasonable as discriminatory treatment on the basis of skin colour. This is contrary to Cole, who is of the opinion that in the near future AI will not be granted legal status either as an independent or even a semi-independent entity, since the technical problems in creating truly independent AI (in the sense that it is capable of learning, growth, change, consciousness and self-consciousness) are still insurmountable [40]. However, once AI has overcome these problems, the possibility of according some legal status to such entities is acknowledged [41].

To conclude, there are many different kinds of AIs, but they all share a few common features: unaccountability, unpredictability and autonomy [42]. These characteristics are also the primary reasons behind the liability problem. Unpredictability together with autonomy limits the potential defendants to humans who have a duty to act, and as a result liability can in some cases not be established for actors who should be liable. The primary cause of that issue is the lack of relevant causation when the AI acts autonomously without involving any human. The rule of law restricts possible criminal behaviour for humans to controlled acts and omissions, which are voluntary. An act that is not willed is not voluntary. If the AI acts autonomously, there is no established causal chain between the defendant and the AI, unless the launch or use of the AI alone is harmful.

Conclusion

Criminal law targets humans, and if we want to maintain the retributive and deterrent functions of punishment in criminal law, we need to direct the law at humans with the possibility of moral accountability, i.e. the humans behind the AI and not the AI itself. The supervisory duty is de facto directed at the humans behind the AI, yet it is not a perfect way to solve the liability problem. AI criminal liability would solve the liability problem, since the AI itself would then always be liable for its own actions, but before that, the AI must possess certain capacities which, in the current state of the art, are still absent. In the future, there is a good chance an AI could fulfil the requirements for criminal liability. Until then, the liability problem persists. At the moment, the AI and its principals levitate in an empty space without a clear notion of what is right and what is wrong in criminal law.

Acknowledgement

None

Conflict of Interest

None

References

  1. Pagallo U (2013) The Law of Robots: Crimes, Contracts and Torts. Dordrecht NY: 45-54.
  2. https://www.iol.co.za/weekend-argus/news/%20concern-over-increase-in-cyber-crime-as-sa-ranked-sixth-%20worldwide-for-cybercrimes
  3. https://www.popiact-compliance.co.za/popia-information
  4. http://www.vmslaw.edu.in/legal-thought-and-practice-in-pre-liberation-and-post0liberation-goa-1950-1970/
  5. Neethling J (2004) The Protection of the Right to Privacy against Fixation of Private Facts. SALJ US 121: 519-524.
  6. http://www.saflii.org/za/cases/ZASCA/1993/3.html
  7. Scherer MU (2016) Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harv J L & Tech US 29: 1-48.
  8. https://link.springer.com/content/pdf/bbm%3A978-94-015-7706-9%2F1.pdf
  9. https://www.un.org/dppa/decolonization/sites/www.un.org.dppa.decolonization/files/decon_num_18-2.pdf
  10. Lisinski RP (2018) The Current South African Legal Position on Artificial Intelligence: What Can We Learn from the United States and Europe? Law Manag SA: 1-76.
  11. https://books.google.co.in/books?id=RQDXu9xDlDMC&pg=PA306&lpg=PA306&dq=Ibid+7+p10
  12. https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html
  13. https://www.jstor.org/stable/319211
  14. Hallevy G (2016) The Criminal Liability of Artificial Intelligence Entities - from Science Fiction to Legal Social Control. Akron Intell Prop J US: 1-33.
  15. https://onlinelibrary.wiley.com/doi/full/10.1111/j.1741-5446.1962.tb00069.x
  16. King TC (2020) Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Sci Eng Ethics EU 26: 89-120.
  17. https://www.scirp.org/(S(i43dyn45teexjx455qlt3d2q))/reference/ReferencesPapers.aspx?ReferenceID=1272381
  18. https://www.dailymaverick.co.za/opinionista/2018-10-04-the-robots-among-us-how-should-we-manage-them/
  19. Vladeck (2014) Machines Without Principals: Liability Rules and Artificial Intelligence. Washington Law Rev US 89: 117-150.
  20. Turner J (2019) Robot Rules: Regulating Artificial Intelligence. Palgrave Macmillan Cham UK: 1-377.
  21. http://ieeexplore.ieee.org/abstract/document/962473/similar
  22. Alheit K (1997) Issues of Civil Liability Arising from the Use of Expert Systems. ACMDLSA: 1-1.
  23. https://www.jstor.org/stable/j.ctv26d9d0.16
  24. Hallevy G (2014) Liability for Crimes Involving Artificial Intelligence Systems. Rakoto kobo USA: 229-257.
  25. https://www.tech4law.co.za/business/law-business-business/ai-regulation-in-south-africa/
  26. https://books.google.co.in/books?id=KWYSAAAAQBAJ&pg=PA118&lpg=PA118&dq=Ibid+21&source=bl&ots=tfpdiQn01h
  27. https://www.lawteacher.net/cases/rylands-v-fletcher.php
  28. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ah
  29. Karlsson CM (2017) Artificial Intelligence and the External Element of the Crime: An Analysis of the Liability Problem. Orebro EU: 1-51.
  30. https://www.icty.org/x/cases/kunarac/tjug/en/foot.htm
  31. Cerka P (2015) Liability for damages caused by artificial intelligence. CLSR EU 31: 376-389.
  32. https://link.springer.com/content/pdf/bbm%3A978-1-349-21776-2%2F1.pdf
  33. https://www.werksmans.com/legal-updates-and-opinions/your-actions-in-cyberspace-can-land-you-in-prison/
  34. Yeung K (2019) A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. CDE EU: 1-96.
  35. https://books.google.co.in/books?id=K3gPEAAAQBAJ&pg=PA215&lpg=PA215&dq=Ibid+30+p13
  36. Neethling J, Potgieter JM, Visser PJ (2015) Law of Delict. 7th Edn, LexisNexis SA: 1-449.
  37. https://link.springer.com/content/pdf/bbm%3A978-94-015-7706-9%2F1.pdf
  38. Lehmann WSN (1981) Frankenstein unbound: Towards a legal definition of artificial intelligence. Futures UK 13: 442-457.
  39. Atabekov A, Yastrebov O (2018) Legal Status of Artificial Intelligence Across Countries: Legislation on the Move. Eur Res Stud J EU 21: 773-782.
  40. https://www.itu.int/ITUD/projects/ITU_EC_ACP/hipssa/Activities/SA/docs/SA1_Legislations/South%20Africa/ElecComm.PDF
  41. August R (1988) Corpus iuris roboticum. CLJ EU 8: 375-388.
  42. Cole GS (1990) Tort liability for artificial intelligence and expert systems. CLJ EU: 127-231.

Citation: Motsamai S (2022) Criminal and Civil Liability of an Artificial Intelligence (AI) for Cybercrimes, Machine or Robot for an Act or Conduct Committed Independent of Human Intervention or Control. J Civil Legal Sci 11: 333. DOI: 10.4172/2169-0170.1000333

Copyright: © 2022 Motsamai S. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
