Artificial Intelligence (AI) in the Healthcare Industry

Artificial intelligence (AI) in the health care sector refers to the use of advanced software and complex algorithms to emulate human cognition in the analysis of intricate medical data. AI differs from traditional health care technologies in its ability to gather data, process it, and deliver a well-defined conclusion to the user. Machine learning algorithms enable these capabilities (Price & Nicholson, 2017). AI algorithms can detect behavioral patterns and form their own logic, and experts regularly test them to lower the margin of error. Two differences separate the behavior of humans from that of AI algorithms. First, algorithms are literal: once a goal is set, an algorithm does not change course on its own and recognizes only what it has been explicitly instructed to do. Second, algorithms are black boxes: while an algorithm can predict an outcome with precision, it cannot explain the reasons or causes behind that outcome. AI thus represents the capacity of computer algorithms to draw inferences without direct human contribution.

Artificial intelligence is advancing rapidly and stands to transform the health care industry. Propelled by big data and powerful machine learning systems, innovators are creating tools to improve clinical care, address inefficiencies, and advance medical research. These tools rely on algorithms and programs developed from health care data that can make predictions or recommendations. Experts regard the reasoning of such algorithms as largely impossible to state explicitly or to understand (Hsu & Elmore, 2019); for this reason, some professionals describe them as "black boxes" (Reid, 2017). This paper examines artificial intelligence and its likely applications, and describes the legal implications of AI in medicine, including regulation, privacy, tort, and intellectual property law. Although AI in the health care industry appears beneficial, it raises business, ethical, and legal concerns that must be addressed for the technology to realize its promise.


AI in Medicine

Like most other sectors, medicine features two concurrent developments. The first is the growth of big data. The second is the rise of complex AI and machine learning techniques capable of detecting sophisticated patterns in that data (Reid, 2017). Big data is characterized by velocity (quick access to data), volume (large quantities of data), and variety (the data's heterogeneity). In the medical context, the data arises from many sources, including medical literature, pharmacy records, electronic health records, insurance claims, clinical trials, and data that patients record on fitness trackers or enter on their smartphones. Machine learning techniques, a subdivision of AI, apply iterative procedures and simple learning rules to find patterns in these vast quantities of data (Forcier et al., 2019). The resulting algorithms can classify cases and make predictions; for example, an algorithm can estimate how long a patient will live based on a collection of symptoms. Such techniques cannot clarify how or why they reach a conclusion: either they cannot justify it at all, or they offer explanations that are accurate but irrelevant to medical understanding. This intrinsic opacity may be compounded by deliberate secrecy surrounding the development and validation of the algorithms. While medical experts describe this field as predictive analytics or AI, this study calls it "black-box medicine" because of that inherent opacity. Adding to the intricacy, as more data becomes available for a machine learning algorithm, professionals can adjust the algorithm and refine future estimates by incorporating that data, so the algorithms of black-box medicine change over time as well as remaining opaque. The rise of both big data and complex artificial intelligence is a substantial development with the potential to improve the health care system.
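To make the opacity concrete, consider a deliberately minimal sketch of a pattern-finding predictor: a nearest-neighbor classifier over synthetic vital-sign data. Everything here (the data, the labels, the feature choice) is invented for illustration and does not correspond to any system described in this paper; the point is only that the "model" produces a prediction with no accompanying rationale.

```python
import math

# Toy training set: (heart_rate, systolic_bp) -> outcome label.
# Entirely synthetic values, for illustration only.
TRAINING_DATA = [
    ((72, 120), "low_risk"),
    ((68, 118), "low_risk"),
    ((75, 125), "low_risk"),
    ((118, 85), "high_risk"),
    ((125, 80), "high_risk"),
    ((110, 90), "high_risk"),
]

def predict(features, k=3):
    """Return the majority label among the k nearest training points.

    The prediction carries no rationale: the "model" is just the
    data plus a distance metric -- a miniature black box.
    """
    neighbors = sorted(
        TRAINING_DATA,
        key=lambda item: math.dist(features, item[0]),
    )[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(predict((70, 122)))   # -> low_risk (resembles the stable patients)
print(predict((120, 82)))   # -> high_risk (resembles the shocked patients)
```

Even in this toy, the output is a bare label: asking *why* a patient was classified as high risk has no answer beyond "similar records were labeled that way," which is the seed of the explainability problem that real, far more complex learners magnify.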

Black-box medicine presents various applications across the health care industry, including prognostics, resource allocation, diagnostics, treatment recommendations, and image analysis. Machine learning already dominates image recognition, and developers have built an algorithm that can detect skin cancer from images of skin lesions. Hsu and Elmore (2019) add that, within a few years, experts may use such algorithms in image analysis to aid or replace much of the work of anatomical pathologists and radiologists. Another existing algorithm continuously monitors vital signs to estimate which trauma patients are likely to hemorrhage and may need intervention to forestall disaster (Price, 2017); prognostic algorithms of this kind could become prevalent within a few years. Black-box algorithms may also enhance diagnosis. For instance, experts can use them to direct research into new conditions or diseases, to suggest off-label applications for existing drugs, and to identify problematic medical behavior or fraud. Professionals can likewise use the algorithms to allocate scarce resources to the patients who need them most. Price (2017) notes that some smartphone apps already use black-box systems to detect issues in infants (such as developmental disorders identified from facial characteristics) or young children (such as autism identified by tracking eye movements). While black-box medicine offers significant benefits, it also poses scientific, legal, and medical challenges. Developers therefore need to establish black-box medicine that is safe and effective, and experts need further study of how to support its efficient development and successful deployment. Another challenge is how to protect patient privacy throughout the process. Black-box medicine enhances care in many areas of the health care system, but these challenges make its primacy questionable.
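The hemorrhage-prediction tool mentioned above can be imagined, in grossly simplified form, as a screen over streamed vital signs. The sketch below is hypothetical: the shock index (heart rate over systolic blood pressure) is a real clinical ratio, but the threshold, the consecutive-reading rule, and the data are invented for illustration and are not validated clinical criteria or the actual algorithm Price (2017) describes.

```python
# Hypothetical prognostic screen over streamed vital signs.
# Thresholds and rules are illustrative only, not clinical guidance.

def shock_index(heart_rate, systolic_bp):
    """Heart rate divided by systolic blood pressure."""
    return heart_rate / systolic_bp

def flag_possible_hemorrhage(readings, threshold=0.9, min_hits=2):
    """Flag a patient when the shock index exceeds `threshold`
    in at least `min_hits` consecutive readings."""
    consecutive = 0
    for heart_rate, systolic_bp in readings:
        if shock_index(heart_rate, systolic_bp) > threshold:
            consecutive += 1
            if consecutive >= min_hits:
                return True
        else:
            consecutive = 0
    return False

stable = [(72, 120), (75, 118), (78, 121)]          # synthetic readings
deteriorating = [(88, 115), (105, 98), (122, 84)]   # synthetic readings

print(flag_possible_hemorrhage(stable))         # -> False
print(flag_possible_hemorrhage(deteriorating))  # -> True
```

A rule this simple is fully inspectable; the legal and regulatory difficulty discussed in the following sections arises precisely because real black-box systems replace such transparent rules with learned models whose internal logic cannot be read off.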



Regulation

The first issue is the most basic: ensuring that such algorithms work. Researchers must ensure that black-box systems operate safely and as intended. The Food and Drug Administration (FDA) typically controls the safety and efficacy of new and emerging medical devices and technologies (Price & Nicholson, 2015). Whether the FDA has legal authority to regulate independent algorithms used to improve medical decision making turns on the complex puzzle of how a medical device is characterized. Moreover, FDA regulation of medical algorithms might contradict the agency's established position that it does not regulate the practice of medicine (Price & Nicholson, 2015). Nonetheless, the FDA is legally permitted to regulate independent algorithms, just as it routinely regulates medical devices, although disputes may still arise. Industry dynamics are among the determining factors: Silicon Valley, the heart of most AI innovation, has not historically operated under FDA regulation, and that unfamiliarity with FDA oversight is among the challenges in ensuring that such algorithms work.

If the FDA can and will regulate AI in the health care system (the agency has stated both the intent and the ability), certain tools help certify the safety and efficacy of emerging medical technology, chiefly scientific understanding and clinical trials. Neither tool translates well to the black-box context. Understanding fails for several reasons: people cannot see the decision-making process inside a black-box algorithm, and machine learning approaches do not explain their reasoning; even when they occasionally do, most people find the explanations hard to understand (Price & Nicholson, 2017). Testing validity, safety, and efficacy through clinical trials works for a few algorithms but is impractical for most (Forcier et al., 2019). Professionals can run clinical trials on algorithms that divide patients into categories and prescribe a specific treatment, but rigorous trials may be infeasible for systems that produce highly individualized predictions or treatment recommendations. Even for algorithms that can be tested, the advantages of black-box medicine (such as economical, quick shortcuts to otherwise unreachable health knowledge) might be curtailed or deferred by the costly investment that clinical trials require, and the more data the algorithms incorporate, the more pronounced these challenges become. In black-box medicine, then, conventional approaches to vetting new medical devices and technologies may be infeasible in some cases and may stifle or slow innovation in others.

The question, then, is what approach the FDA should adopt to address this challenge. The ideal path is likely to be adaptive rather than rigid: a relatively light pre-market examination focused on procedural safeguards (such as the development techniques, the validation procedures, and the quality of the data used), combined with strong post-market surveillance as the algorithms enter clinical care. Handelman and colleagues (2019) note that the FDA is interested in this path. Yet the approach is easier stated than implemented; the corresponding regime of post-market oversight for medications has proven hard to realize. The most promising possibility would require the FDA to enlist surveillance help from other sophisticated health care entities and, critically, to enable mechanisms for sharing useful and important data with them. Physician specialty associations, hospitals, and insurance companies have an interest in verifying that black-box systems work, whether to aid patients or to serve their own priorities, and competing developers might be interested in confirming that algorithms work in order to identify problems with existing products. These sophisticated entities could also conduct such evaluations, particularly because they are close to clinical care, and could supply performance data. However, implementing this kind of cooperative governance requires data, and most algorithm developers are hesitant to share that data with other parties. Perhaps the FDA could take on a central data-sharing role, enabling other entities to contribute to the oversight of black-box medicine; precisely how such an arrangement might be realized remains an unresolved puzzle.



Tort Law

What happens when black-box medicine fails? Black-box medicine interacts with tort law in several contexts. For instance, if regulation does not foster high-quality algorithms, or if flaws are built into the algorithms, then the creators of the algorithms (and of the tools that depend on them) may be liable in tort. Judges have been hesitant to extend product liability doctrines to software developers, and courts have been especially reluctant in the health care software setting (Wang, Casalino, & Khullar, 2019). Some of this hesitancy arises because courts treat health care software as a technology that merely supports decision making by offering information and analysis, with the final verdict resting on the medical provider. Black-box medicine turns that concept on its head, creating uncertainty over whether health care professionals should be liable for decisions made or suggested by black-box systems that they cannot, or do not, understand.

This ambiguity raises a further uncertainty about what health care institutions and providers must do to fulfill their duties of care to patients in an industry featuring black-box systems. It is unclear whether providers should evaluate the quality of black-box processes against procedural measures (such as routine data checks or the validation undertaken) before relying on such systems in the course of delivering care (Price & Nicholson, 2017), and whether health care organizations should conduct the same evaluations before implementing black-box software. This paper argues that institutions should test the quality of these algorithms; for now, however, the data needed for that kind of appraisal is mostly inaccessible, just as in the regulatory context described earlier (Wang, Casalino, & Khullar, 2019). It is likewise unclear whether a provider should override a recommendation when an algorithm suggests an intervention that seems dangerous, unhelpful, expensive, or useless. Physicians are trained to serve patients, so they may be inclined to override such recommendations; yet if professionals apply only those decisions they would have reached themselves, they forfeit most of the benefits of black-box medicine drawn from otherwise unreachable patterns in big data. Even where algorithms merely perform routine analyses more cheaply and quickly, there is a cost to excessive caution. These remain open questions, and the courts will eventually have to address them.


Intellectual Property

The protection of intellectual property presents another set of issues for the development of black-box medicine: when businesses invest in developing black-box algorithms, how can they protect that investment? Developing black-box algorithms can be costly (Hsu & Elmore, 2019). Developers must acquire, generate, or assemble the large datasets needed to train their algorithms; they must marshal the resources and expertise to build the algorithms; and they must then confirm that the algorithms work. Ordinarily, one would expect intellectual property to protect the informational assets created by these expenditures, so that businesses are willing to commit the necessary investment without fearing that others will appropriate the resulting inventions. As it turns out, however, intellectual property protection for black-box medicine is limited.

Although patents are the natural choice for protecting technological innovation, they do not offer robust incentives for black-box medicine. A series of recent decisions by the United States Supreme Court interpreting the section of the Patent Act governing patentable subject matter has made black-box systems harder to patent. The Court restated its long-standing position that laws of nature cannot be patented (Pasquale, 2015), and applied that rule to a test that used the quantity of a metabolite in a person's blood to adjust a drug's dosage, which many had considered a patentable application of a natural law. The Court used broad language in invalidating the patent (Pasquale, 2015): conventional, well-understood, routine activity previously engaged in by experts working in the field is insufficient to transform an unpatentable law of nature into a patentable application of that law. Moreover, where an underlying biological relationship is the focus of the invention, using that relationship to direct medical treatment is unpatentable. This description fits many black-box algorithms well enough to suggest that they may not be patentable subject matter. Section 112 may raise further patent problems, because it requires a written description of the invention (Pasquale, 2015); while the courts have not yet tested this issue, it is unclear how well one can describe an opaque algorithm or how extensive the resulting protection would be.

Trade secrecy appears to be a common solution, but secrecy has its own problems. Trade secret law protects against the misappropriation of information that its holder keeps secret and that derives commercial value from that secrecy. Secrecy seems well suited to protecting an algorithm that is opaque and hard to understand in any case, and businesses wishing to secure their investment can also keep secret the data used to develop an algorithm, the process by which it was validated, and the method by which it was built. Organizations developing black-box algorithms often depend on such secrecy (Hsu & Elmore, 2019). Although secrecy is a workable intellectual property strategy, it runs headlong into the issues outlined earlier regarding regulation, safety, and malpractice. Insurers, doctors, and patients may be reluctant to accept medical algorithms whose operation is a mystery, and some will not accept algorithms whose creation and testing are equally opaque. If third parties are to help ensure algorithmic quality and validity, it is uncertain how they can complete such evaluations without the underlying data. The reliance of system developers on trade secrecy echoes earlier episodes in which key data remained secret, and those experiences suggest that similar fights may emerge over access to algorithmic information.

Yet it remains unclear whether businesses will invest adequately in black-box medicine if they lack intellectual property incentives, and how the government can foster this innovation while guaranteeing its safety and efficacy. These uncertainties are, and will continue to be, pressing questions for the development of AI in health care.



Privacy

Privacy issues arise throughout the development and deployment of black-box medicine. Confidentiality matters at several stages: when collecting the immense quantities of medical data needed to build the systems, and when distributing that information to manage them. Algorithm developers must gather information from different sources to train machine learning algorithms, and data about the algorithms' real-world performance must then be shared with others in the medical sector for evaluation and validation. At every stage, the privacy of patient data is a major concern, particularly under the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA). The rule restricts and governs the use and disclosure of protected health information by covered entities (Price & Nicholson, 2017). Protected information encompasses most individually identifiable health data, while covered entities include health care providers, health insurers, health data clearinghouses, and their business associates. HIPAA establishes a multifaceted set of permitted and restricted uses of protected medical data (Price & Nicholson, 2017). Notably, the Privacy Rule does not govern de-identified information, although de-identified data raises its own issues about collection and the prospect of reidentification. At the same time, big-data aggregators and non-covered entities such as Apple or Google collect information that is neither de-identified nor subject to HIPAA. Navigating HIPAA's Privacy Rule, and managing the privacy concerns of the parties whose information is used across black-box medicine, presents yet another set of likely legal issues.
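The de-identification on which the Privacy Rule turns can be sketched, in heavily simplified form, as stripping direct identifiers from a record before it is shared. The snippet below is illustrative only: HIPAA's Safe Harbor method lists eighteen categories of identifiers and real pipelines must also assess reidentification risk, whereas this sketch handles a small invented subset of fields.

```python
# Simplified Safe Harbor-style de-identification sketch.
# Illustrative subset of identifier fields only; HIPAA's Safe Harbor
# method covers 18 categories plus reidentification-risk concerns.

DIRECT_IDENTIFIERS = {"name", "phone", "email", "ssn", "street_address"}

def deidentify(record):
    """Return a copy of `record` with direct identifiers removed and
    age coarsened (Safe Harbor groups ages over 89 into one bucket)."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"
    return clean

patient = {
    "name": "Jane Doe",          # dummy data for illustration
    "ssn": "000-00-0000",
    "age": 93,
    "diagnosis": "type 2 diabetes",
}
print(deidentify(patient))  # -> {'age': '90+', 'diagnosis': 'type 2 diabetes'}
```

The simplicity of the sketch underscores the paper's point: removing listed identifiers is mechanical, but whether the remaining data can be recombined to reidentify a patient is exactly the residual risk the Privacy Rule leaves unregulated.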

Black-box medicine can change the health care system and is progressing fast enough to reshape the industry. Some black-box health care algorithms already operate in consumer smartphone applications; others are coming to medical care. Yet the legal concerns raised by developing and implementing artificial intelligence algorithms that people cannot fully understand are considerable. As discussed here, patient privacy, regulation, intellectual property, and causes of action such as product liability and medical malpractice all shape how black-box medicine is developed and deployed, and black-box medicine may in turn reshape how people address those concerns in the medical context. Scholars should determine whether entity-based privacy regulation remains relevant in an industry where large data collections are both useful and necessary, and whether intellectual property law needs new approaches suited to the changing nature of algorithms and health information. Health care pioneers should clarify whether the "learned intermediary" doctrine must yield to the recognition that providers cannot fully grasp the technologies of their regular practice, or the decisions those tools help make, when they lack the relevant data. Finally, experts should consider whether the FDA needs to adjust its approach to regulating new medical innovations as artificial intelligence software gains prominence. As black-box medicine expands and advances, the need to address these legal concerns, and to train professionals who understand them in context, will only grow.
