Ethical, Social, Sustainability, and Regulatory Challenges in Facial Recognition Technology: A Professional Evaluation

Contents

Abstract

1.   Contextual Background

1.1.    Technological Advancements Driving Adoption

1.2.    Ethical and Privacy Issues

1.3.    Environmental Footprint and Regulatory Scrutiny

1.4.    Restoring People's Faith in FRT

2.   Critical Analysis of Issues

2.1.    Ethical Issues

2.1.1.    Bias and Discrimination

2.1.2.    Informed Consent and Transparency

2.2.    Social Issues

2.2.1.    Surveillance and Public Behavior

2.2.2.    Digital Inequality

2.3.    Sustainability Issues

2.4.    Environmental Impact

2.4.1.    Industry Initiatives

2.5.    Regulatory Issues

2.5.1.    Global Disparities in Regulation

2.6.    Case Example

3.   Creative Problem-Solving Solutions

3.1.    Bias Mitigation

3.2.    Enhancing Sustainability

3.3.    Inclusive Design

3.4.    Strengthening Regulatory Frameworks

4.   Conclusion and Recommendations

4.1.    Recommendations

References

Abstract

Facial recognition technology (FRT) is one of the most striking examples of how artificial intelligence (AI) can address pressing problems in retail, security, and public safety. But as every silver lining has a dark cloud, FRT is not free of social, ethical, sustainability, and regulatory concerns. This report evaluates these issues and works towards solutions grounded in professional practice and ethics, supporting that perspective with practical guidance and recommendations to inform policy formulation and the effective deployment of FRT.

1.   Contextual Background

Since the 1960s, the development of facial recognition technology (FRT) has been nothing short of remarkable. Early pioneers like Woody Bledsoe set the ball rolling by formulating algorithms that let early systems record facial features manually (Matulionyte and Zalnieriute, 2024). With AI and convolutional neural networks (CNNs) coming to the forefront, FRT has reached astonishing levels of accuracy, allowing products across numerous industries to incorporate it. Facial recognition is now commonplace in smartphones such as Apple's iPhones with Face ID, in state security and law enforcement, and even in service centres (Khanam et al., 2024).

1.1.        Technological Advancements Driving Adoption

The widespread implementation of FRT across different sectors is driven by its potential to improve security and streamline processes. Biometric access through facial recognition has been integrated into mobile phones, making it easier for users to unlock devices and applications (Ross et al., 2023). In the context of public security, law enforcement agencies have been using body-worn cameras equipped with FRT for instantaneous recognition (Fontes and Perrone, 2021). The technology's growing adoption is reflected in its market trajectory: analysts expect the market to balloon, especially with the growth of AI and machine learning capabilities (Conference, 2023).

1.2.        Ethical and Privacy Issues

Even considering its noticeable advantages, facial recognition technology raises serious ethical concerns, mostly around the principles of equal protection and non-discrimination. Algorithms exhibit higher error rates for women and darker-skinned individuals, a claim substantiated by research conducted by the MIT Media Lab in the context of law enforcement (Leslie, 2020). FRT thus raises alarming racial and gender issues, as algorithmic bias stems from imbalanced training datasets that do not reflect reality (Falk et al., 2021) and distort the true representation of people's lives (Díaz-Rodríguez et al., 2023). Furthermore, the deployment of such technologies without users' explicit consent, as in the case of Clearview AI, has caused public outrage and sanctions in the form of fines under the GDPR (Pat Kelly, 2022).

To deal with such issues, it is crucial to provide transparency and accountability. Ethical frameworks such as the ACM Code of Ethics state that AI systems should be developed and used in a fair and inclusive manner (Díaz-Rodríguez et al., 2023). However, such compliance is difficult to achieve because of conflicting business interests.

1.3.        Environmental Footprint and Regulatory Scrutiny

The environmental impact of FRT is also challenging. FRT models require energy-intensive training and inference throughout their life cycle, producing a carbon footprint comparable to the lifetime emissions of several cars (Kortli et al., 2020). Rapid technological change worsens this situation, since faster hardware turnover increases electronic waste and undermines environmental sustainability (Parajuly et al., 2022).

The ethical aspects of FRT and the issue of public trust are essential and cannot be overlooked when dealing with its challenges. Frameworks such as the EU's GDPR regulate these matters, particularly the use of biometrics, and impose strict compliance measures (Seun Solomon Bakare et al., 2024). However, such approaches are not universal and vary from country to country. The EU is quite the exception: in the U.S. there are minimal regulations on FRT, leaving more room for unchecked growth (Evora, 2024). Such discrepancies create significant compliance challenges for multinational enterprises, as the law is not uniform across the board but regionally based.

1.4.        Restoring people’s faith in FRT

Restoring public trust in FRT requires a focused strategy covering ethics, social issues, and the environment. Positive initiatives such as IBM's AI Fairness 360 and Google's focus on energy-efficient data centres can help realise that trust (Johnson, 2024). Incorporating the views of the various parties involved, coupled with cross-border collaboration, would help ensure that FRT is locally relevant and responsibly deployable across regions.

2.   Critical Analysis of Issues

2.1.        Ethical Issues

2.1.1.   Bias and Discrimination:

Facial recognition algorithms continue to be skewed in favour of dominant groups, to the detriment of marginalised populations. Research from the MIT Media Lab provoked outrage when it established that error rates were markedly higher for women and darker-skinned individuals, a concern especially serious in areas such as law enforcement (Díaz-Rodríguez et al., 2023). For example, Amazon's Rekognition system misidentified individuals from minority groups because of unrepresentative training datasets (Haber, 2023). Such scenarios entrench bias, discrimination, and a lack of faith in FRT systems, and they demand a swift response through algorithmic audits and more inclusive training datasets (Díaz-Rodríguez et al., 2023).
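The kind of audit called for here can be illustrated with a short sketch. The group names, evaluation log, and numbers below are hypothetical toy data, not figures from the studies cited above; the sketch only shows how per-group error rates and their disparity might be computed.

```python
# Illustrative sketch (not from the report's sources): auditing per-group
# misidentification rates from an evaluation log of (group, correct?) pairs.

def per_group_error_rates(records):
    """Compute the misidentification rate for each demographic group."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Toy evaluation log: 100 trials per hypothetical group.
log = [("group_a", True)] * 95 + [("group_a", False)] * 5 \
    + [("group_b", True)] * 80 + [("group_b", False)] * 20

rates = per_group_error_rates(log)
print(rates)  # {'group_a': 0.05, 'group_b': 0.2}

# Ratio of worst to best error rate: a simple disparity flag for an audit.
disparity = max(rates.values()) / min(rates.values())
print(round(disparity, 1))  # 4.0
```

A real audit would compute false match and false non-match rates separately per group, but the aggregation pattern is the same.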

2.1.2.   Informed Consent and Transparency:

Informed consent is a critical concern, as many FRT systems operate without it, violating privacy. The case of Clearview AI, in which billions of facial images were collected from sites whose users never consented, has drawn criticism globally and drives home the failure of opaque systems (Saluja and Douglas, 2023). Ethical guidelines such as the ACM Code of Ethics set out what practitioners can and cannot do where data collection and processing are concerned (Wang et al., 2024). However, applying these principles successfully is difficult and is complicated by conflicting commercial interests.
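As a minimal sketch of the consent-first principle these guidelines point towards, the gate below refuses to process biometric data unless explicit opt-in consent is recorded. The field names and exception type are illustrative assumptions, not taken from any real FRT system.

```python
# Hypothetical consent gate: biometric processing is blocked unless the
# record carries an explicit opt-in. All names here are illustrative.

class ConsentError(Exception):
    """Raised when biometric processing is attempted without consent."""

def enroll_face(user_record):
    """Only proceed when explicit, informed consent has been recorded."""
    if not user_record.get("explicit_consent", False):
        raise ConsentError("biometric processing requires explicit opt-in consent")
    # ... downstream feature extraction would happen here ...
    return {"user_id": user_record["user_id"], "status": "enrolled"}

print(enroll_face({"user_id": 1, "explicit_consent": True}))
# {'user_id': 1, 'status': 'enrolled'}
```

The design choice is that absence of a consent flag defaults to refusal, mirroring the GDPR's opt-in (rather than opt-out) stance on biometric data.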

2.2.        Social Issues

2.2.1.   Surveillance and Public Behavior:

The adoption of FRT in public areas is controversial, especially in its implications for privacy and society as a whole. Extensive deployment of FRT can give rise to surveillance capitalism, in which people's images are captured and exploited without their willing participation (Wang et al., 2024). Such fears bear directly on democracy and ideals of freedom: people who know they are under surveillance tend to behave differently. For example, the City of San Francisco's decision to halt its use of FRT was attributed to widespread anxiety about being watched all the time (Patel and Monterey, 2023). These concerns underline the relevance of effective regulation and supervision to build public confidence and ensure FRT is used responsibly.

2.2.2.   Digital Inequality:

The disproportionate distribution of FRT's benefits brings the more acute concern of digital inequality to light. While developed nations gain security and convenience from these technologies, disadvantaged populations are left out or made socially worse off. There is evidence that many FRT systems fail to recognise people in the non-Western world, widening the digital gap and aggravating existing social divides (Patel and Monterey, 2023). FRT algorithms are frequently trained on datasets that lack diversity, causing higher error rates for individuals at the lower end of the spectrum, who are further alienated as a result (Saluja and Douglas, 2023). Bridging these gaps demands a multifaceted approach that advocates for the inclusion of minority groups in the development and use of FRT so that all groups can benefit equally.

2.3.        Sustainability Issues

2.4.        Environmental Impact:

The environmental impact of FRT has become a hot topic. Hsueh (2020) points out that training a single large model can emit around 626,000 pounds of CO2, roughly the lifetime emissions of several cars. Rapid technology turnover adds to the e-waste produced: FRT systems that discard outdated hardware harm the environment, emphasising the need for more sustainable development methods (Adjabi et al., 2020). With the growing usage of FRT, it is important to control the technology's overall environmental impact and align it with sustainability goals across the globe.
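Estimates of this kind typically come from simple energy arithmetic. The sketch below shows that arithmetic with wholly assumed inputs (cluster power, training time, data-centre overhead, and grid carbon intensity); none of these numbers are measured figures from Hsueh's study.

```python
# Back-of-envelope estimate of training emissions. Every input value here
# is an assumed illustration, not a figure from the report's sources.

def training_co2_kg(power_kw, hours, pue, grid_kg_per_kwh):
    """Energy drawn (kW x hours), inflated by data-centre overhead (PUE),
    converted to CO2 via the grid's carbon intensity (kg CO2 per kWh)."""
    return power_kw * hours * pue * grid_kg_per_kwh

# Assumed: a 300 kW GPU cluster, two weeks of training,
# PUE of 1.5, and a grid intensity of 0.4 kg CO2 per kWh.
kg = training_co2_kg(300, 24 * 14, 1.5, 0.4)
print(round(kg))  # 60480 kg of CO2 for this hypothetical run
```

The point of the sketch is that each factor (hardware power, duration, cooling overhead, grid mix) is a separate lever for reducing the footprint, which is why data-centre efficiency and renewable energy both appear among the remedies discussed later.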

2.4.1.   Industry Initiatives:

Many organisations, notably Google and Microsoft, are making efforts to minimise the environmental harm caused by FRT. Their pursuit of energy-efficient data centres powered by green energy is worthwhile for the sector (Ewim et al., 2023). One barrier they faced was the climate impact of rising energy usage, which they offset by using AI optimisation to cut energy consumption by 40%, making FRT operations markedly more sustainable (Zamponi and Barbierato, 2022). However, these initiatives still have a long way to go: far broader changes are needed before sustainable practice becomes the norm rather than the exception.

2.5.        Regulatory Issues

2.5.1.   Global Disparities in Regulation:

Different regions regulate FRT in different ways, which complicates the overall framework. The General Data Protection Regulation (GDPR) is an EU regulation that imposes strict rules on the use of personal data, including biometric information, from obtaining the data subject's consent to minimising the amount of data collected in the first place (Papers, Management and Development, 2023). The US, by contrast, takes a more hands-off approach, imposing little regulation in order to maximise innovation; this creates problems for global firms trying to comply with the differences (Almeida, Shmarko and Lomas, 2022). The downside of such regulatory diversity is that good governance practices are hard to implement and the tendency for misconduct increases.

2.6.        Case Example:

The Clearview AI case emphasises the necessity of strong legal frameworks. The company was fined €30.5 million under the GDPR for collecting data without appropriate authorisation, an eye-opener about the consequences of ignoring privacy regulations (Izaguirre, 2024). The case should also serve as an alert to interested parties about how harmful non-compliance can be. Such regulatory differences can be resolved through a worldwide framework such as the EU AI Act, which stakeholders can adopt to limit potential harm while encouraging innovations such as more trustworthy uses of FRT (Almeida, Shmarko and Lomas, 2022).

3.   Creative Problem-Solving Solutions

3.1.        Bias Mitigation

Bias in facial recognition technology is problematic from both social and ethical perspectives, particularly for minority groups. Toolkits such as IBM's AI Fairness 360 provide practical help by detecting and correcting the imbalances in training datasets that produce biased tools (Johnson, 2024). This public set of resources allows practitioners to measure the fairness of machine learning pipelines and make the changes necessary to achieve fairer results. For instance, applying AI Fairness 360 in employment systems has cut down inequalities between population groups, demonstrating its wider applicability (Chinta et al., 2024).
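One of the core measures such toolkits report is disparate impact. The plain-Python sketch below illustrates the arithmetic only; it is not AI Fairness 360's actual API, and the toy outcome and group lists are invented.

```python
# Plain-Python sketch of one fairness metric AI Fairness 360 reports:
# disparate impact = P(favourable | unprivileged) / P(favourable | privileged).
# The toolkit's real API differs; this only illustrates the arithmetic.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favourable-outcome rates between two groups (1.0 = parity)."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Toy decisions: 1 = favourable outcome, "u"/"p" = hypothetical groups.
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
groups   = ["u", "u", "u", "u", "u", "p", "p", "p", "p", "p"]
di = disparate_impact(outcomes, groups, "u", "p")
print(round(di, 2))  # 1.0 here; the common "80% rule" flags values below 0.8
```

In an audit workflow, a value below the chosen threshold would trigger mitigation such as dataset reweighing before retraining.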

It is also important to promote diversity in development teams. Studies have shown that teams drawn from different backgrounds are more likely to spot and correct biases in algorithm design (Díaz-Rodríguez et al., 2023). By adopting the views of different demographic groups, developers can design systems that behave more naturally and less artificially (Bano, Zowghi and Gervasi, 2024). This work is reinforced by cooperation among researchers, businesses, and minority communities, which helps ensure that FRT matures properly for all peoples.

3.2.        Enhancing Sustainability

The environmental impact of FRT is large, stemming from high energy utilisation and the e-waste produced. One of the best available answers is to optimise the energy consumption of the algorithms themselves. For example, Google's AI-optimised data-centre cooling systems have reduced energy consumption by 40%, showcasing AI's potential to aid sustainability (Zamponi and Barbierato, 2022). The entire sector can take such measures to lower its carbon footprint.

Another approach is to power data centres with renewable energy. Microsoft, for instance, has pledged to be carbon negative by 2030, using solar and wind electricity to offset the emissions caused by its infrastructure, including FRT workloads (Stocker et al., 2024). In this way, technological advancement can actively engage with the preservation of nature.

Moreover, modular hardware design is key to reducing e-waste. As FRT systems evolve, individual modules can be upgraded instead of replacing the whole system, extending hardware lifespan. Dell and HP have been first to market with modular computing systems that serve as a model for sustainable hardware in the FRT context (Adjabi et al., 2020). Such initiatives, coupled with collaboration from other manufacturers, can greatly reduce the environmental harm of FRT expansion.

3.3.        Inclusive Design

All stakeholders should be involved in every stage of the design and implementation of FRT systems. The most suitable approach is to foster diversity during the design stages while designing for privacy, and including the voices of underrepresented communities is a crucial aspect of this. A study commissioned by the UN cultural agency UNESCO found that designing with diverse communities in mind is vitally important but often overlooked.

An additional step is developing datasets that are geographically balanced, bringing variety in appearance, facial geometry, and ethnicity to the training set. The cross-cultural performance of automated systems, especially facial recognition, faces several challenges owing to bias in datasets, algorithms, and non-representative training data (Heeks, 2022). Balanced datasets would ensure that FRT systems are efficient and credible in most parts of the world and do not discriminate against non-Western countries.
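A geographic-balance audit of the kind suggested here can be sketched in a few lines. The region labels, dataset shares, and the 10% representation floor are illustrative assumptions, not thresholds from the literature.

```python
# Hypothetical audit of a training set's regional balance. Region names,
# dataset composition, and the 10% floor are illustrative choices only.

from collections import Counter

def underrepresented_regions(labels, floor=0.10):
    """Return regions whose share of the dataset falls below `floor`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(r for r, c in counts.items() if c / total < floor)

# Toy dataset of 100 samples, heavily skewed towards two regions.
dataset = ["europe"] * 60 + ["north_america"] * 30 + ["africa"] * 6 + ["asia"] * 4
print(underrepresented_regions(dataset))  # ['africa', 'asia']
```

Flagged regions would then be targets for additional, consent-based data collection before the model is retrained.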

Lastly, opinion remains divided on the ethical considerations around using data collected from the public to develop AI. Such technologies have massive implications for individual privacy, exacerbated by a lack of clarity about when and how such data might be used in the future. Advocacy appears to be shifting from merely ensuring inclusivity in design towards demanding ethical, transparent practice.

3.4.        Strengthening Regulatory Frameworks

Regulatory measures vary with local context, but a common denominator in the form of FRT laws covering biometric technology would best handle the differences in governance. Regulators need not look far: the European Union's AI Act outlines clear-cut regimes for AI systems on the basis of risk and sets strict rules for the biometric use of face recognition technologies (Virtosu and Li, no date). Its criteria of equity, responsibility, and openness set a good example for other regions, as these principles should govern the use of new technologies everywhere.

The desire to create a common set of standards for AI technologies is also present, and institutions such as the United Nations provide a proper focus for that effort. In determining ethical standards for AI development, UNESCO considers it critical to measure inventions against fundamental rights whose principles must not be breached (Gill and Germann, 2022).

Practice also evidences the necessity of conforming to the rules. One of the most significant examples is the €30.5 million penalty imposed on Clearview AI for unauthorised data harvesting in the EU and other GDPR jurisdictions. The case illustrates the business cost of falling foul of the rules and should motivate all stakeholders to adopt specific compliance policies to avoid the same fate.

Policy makers need to involve academia, industry and civil society to improve the effectiveness of the regulation. This approach guarantees that rules are workable and effective, addressing the complex and multi-dimensional nature of FRT governance (Wang et al., 2024). Stakeholders can engage in activities that foster trust and develop a common global perspective and framework that allows for responsible development activities.

4.   Conclusion and Recommendations

Facial recognition technology (FRT) promises to be revolutionary, changing areas such as security, healthcare, and customer service. Its deployment, however, must be done responsibly and in accordance with ethical, social, sustainability, and regulatory requirements. These challenges must be addressed in order to foster public confidence and strive for just outcomes that benefit society at large.

Among the challenges in implementing FRT, the major one is likely the ethical bias in its algorithms: studies are already showing that biased datasets lead to discrimination against certain groups and populations (Almeida, Shmarko and Lomas, 2022). Likewise, growing e-waste and expanding energy usage are urgent questions. Added to the absence of effective international frameworks, these challenges leave international practice facing the imperative of finding practical remedies.

4.1.        Recommendations:

1.    Bias Mitigation: The use of inclusive datasets together with regular algorithmic audits is essential to reducing bias risk in FRT systems. IBM's AI Fairness 360 toolkit is one of the instruments developed to help overcome such inequalities in practice (Johnson, 2024). Involving people from different ethnic backgrounds in the design process also helps minimise systemic imbalances, as systems are then designed from a broader demographic perspective (Gupta et al., 2023).

2.    Sustainability Integration: To lessen the environmental risk, it is important to implement energy-efficient strategies. Google, for instance, has shown that AI-optimised cooling systems in data centres can cut energy usage substantially (Zamponi and Barbierato, 2022). Modular hardware architectures can extend system lifetimes, reducing e-waste and supporting sustainable deployment practices (Adjabi et al., 2020).

3.    Designing for All: Inclusive design equips underrepresented communities with tools to develop systems that are fair and culturally appropriate. UNESCO's AI for Social Good initiative illustrates how engaging more stakeholders ensures inclusivity in AI applications (Moon, 2023). Ensuring that people understand how their information will be used also builds trust and deepens the relationship (Olateju et al., 2024).

4.    Regulatory Synchronization at the Global Level: Practices and laws concerning AI currently differ between countries. The EU's AI Act offers a comprehensive structure for governing risk in a way that is fair, accountable, and transparent (Gawande and Kumar, 2023). Global standards that endorse responsible growth without infringing fundamental freedoms can emerge from mutual engagement among policymakers, industry, and academia.

With these actions taken, FRT can become a powerful instrument that is ethical, eco-sensitive, and trusted by the public. These challenges must be addressed proactively so that FRT not only reduces risks but actively advances human progress.

References

Adjabi, I. et al. (2020) ‘Past, present, and future of face recognition: A review’, Electronics (Switzerland), 9(8), pp. 1–53. Available at: https://doi.org/10.3390/electronics9081188.

Almeida, D., Shmarko, K. and Lomas, E. (2022) ‘The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of US, EU, and UK regulatory frameworks’, AI and Ethics, 2(3), pp. 377–387. Available at: https://doi.org/10.1007/s43681-021-00077-w.

Bano, M., Zowghi, D. and Gervasi, V. (2024) ‘A Vision for Operationalising Diversity and Inclusion in AI’, Proceedings - 2024 IEEE/ACM International Workshop on Responsible AI Engineering, RAIE 2024, pp. 36–43. Available at: https://doi.org/10.1145/3643691.3648587.

Chinta, S.V. et al. (2024) ‘FairAIED: Navigating Fairness, Bias, and Ethics in Educational AI Applications’, pp. 1–47. Available at: http://arxiv.org/abs/2407.18745.

Conference, P. (2023) THE MODERN VECTOR OF.

Díaz-Rodríguez, N. et al. (2023) ‘Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation’, Information Fusion, 99(May), p. 101896. Available at: https://doi.org/10.1016/j.inffus.2023.101896.

Evora, U. De (2024) 'Mestrado em Relações Internacionais e Estudos Europeus' [Master's in International Relations and European Studies].

Ewim, D.R.E. et al. (2023) ‘Impact of Data Centers on Climate Change: A Review of Energy Efficient Strategies’, The Journal of Engineering and Exact Sciences, 9(6), pp. 16397–01e. Available at: https://doi.org/10.18540/jcecvl9iss6pp16397-01e.

Fontes, C. and Perrone, C. (2021) ‘Ethics of surveillance: harnessing the use of live facial recognition technologies in public spaces for law enforcement’, Institute for Ethics in Artificial Intelligence, (December), pp. 1–11. Available at: https://ieai.mcts.tum.de/.

Gawande, A. and Kumar, A. (2023) Enhancing Productivity in Hybrid Mode The Beginning of a New Era. Available at: https://doi.org/10.5281/zenodo.8096542.

Gill, A.S. and Germann, S. (2022) ‘Conceptual and normative approaches to AI governance for a global digital ecosystem supportive of the UN Sustainable Development Goals (SDGs)’, AI and Ethics, 2(2), pp. 293–301. Available at: https://doi.org/10.1007/s43681-021-00058-z.

Haber, E. (2023) 'Facial Recognition' [Preprint].

Heeks, R. (2022) ‘Digital inequality beyond the digital divide: conceptualizing adverse digital incorporation in the global South’, Information Technology for Development, 28(4), pp. 688–704. Available at: https://doi.org/10.1080/02681102.2022.2068492.

Hsueh, G. (2020) 'Carbon Footprint of Machine Learning Algorithms', Senior Project.

Izaguirre, B.A. (2024) 'Former aide to 2 New York governors is charged with being an agent of the Chinese government', A2 UP FRONT.

Johnson, S. (2024) 'Creating Seamless Omnichannel Experiences in Ecommerce', International Journal of Core Engineering & Management, (December).

Khanam, R. et al. (2024) ‘A Comprehensive Review of Convolutional Neural Networks for Defect Detection in Industrial Applications’, IEEE Access, 12(May), pp. 94250–94295. Available at: https://doi.org/10.1109/ACCESS.2024.3425166.

Kortli, Y. et al. (2020) ‘Face recognition systems: A survey’, Sensors (Switzerland), 20(2). Available at: https://doi.org/10.3390/s20020342.

Leslie, D. (2020) ‘Understanding Bias in Facial Recognition Technologies’, SSRN Electronic Journal [Preprint]. Available at: https://doi.org/10.2139/ssrn.3705658.

Matulionyte, R. and Zalnieriute, M. (2024) ‘The Cambridge Handbook of Facial Recognition in the Modern State’, The Cambridge Handbook of Facial Recognition in the Modern State [Preprint]. Available at: https://doi.org/10.1017/9781009321211.

Moon, M.J. (2023) ‘Searching for inclusive artificial intelligence for social good: Participatory governance and policy recommendations for making AI more inclusive and benign for society’, Public Administration Review, 83(6), pp. 1496–1505. Available at: https://doi.org/10.1111/puar.13648.

Olateju, O.O. et al. (2024) ‘Exploring the Concept of Explainable AI and Developing Information Governance Standards for Enhancing Trust and Transparency in Handling Customer Data’, Journal of Engineering Research and Reports, 26(7), pp. 244–268. Available at: https://doi.org/10.9734/jerr/2024/v26i71206.

Papers, S., Management, S. and Development, R. (2023) ‘MANAGEMENT PROTECTION OF PERSONAL DATA PROCESSING’, 23(3).

Parajuly, K. et al. (2022) 'Future e-waste scenarios'.

Pat Kelly (2022) ‘Facial Recognition Technology and the Growing Power of Artificial Intelligence. Report of the Standing Committee on Access to Information, Privacy and Ethics’, (October). Available at: www.ourcommons.ca.

Patel, S. and Monterey (2023) 'Regulating Federal Law Enforcement', Naval Postgraduate School.

Ross, G.M.S. et al. (2023) ‘Best practices and current implementation of emerging smartphone-based (bio)sensors – Part 1: Data handling and ethics’, TrAC - Trends in Analytical Chemistry, 158, p. 116863. Available at: https://doi.org/10.1016/j.trac.2022.116863.

Saluja, S. and Douglas, T. (2023) ‘The Implications of Using Artificial Intelligence (AI) for Facial Analysis and Recognition’, Journal of Student Research, 12(3), pp. 1–7. Available at: https://doi.org/10.47611/jsrhs.v12i3.5120.

Del Ser, J. et al. (2024) ‘On generating trustworthy counterfactual explanations’, Information Sciences, 655(November 2023), p. 119898. Available at: https://doi.org/10.1016/j.ins.2023.119898.

Seun Solomon Bakare et al. (2024) ‘Data Privacy Laws and Compliance: a Comparative Review of the Eu Gdpr and Usa Regulations’, Computer Science & IT Research Journal, 5(3), pp. 528–543. Available at: https://doi.org/10.51594/csitrj.v5i3.859.

Smith, M. and Miller, S. (2022) ‘The ethical application of biometric facial recognition technology’, AI and Society, 37(1), pp. 167–175. Available at: https://doi.org/10.1007/s00146-021-01199-9.

Stocker, V. et al. (2024) ‘ICT Sustainability Reporting Strategies of Large Tech Companies: Changes in Format, Scope, and Content’, SSRN Electronic Journal, 131(16), pp. 1–62. Available at: https://doi.org/10.2139/ssrn.4927128.

Virtosu, I. and Li, C. (no date) ‘Navigating face recognition technology : A comparative study of regulatory and ethical challenges in China and the European Union’, 2023, pp. 111–140.

Wang, X. et al. (2024) ‘Beyond surveillance: privacy, ethics, and regulations in face recognition technology’, Frontiers in Big Data, 7. Available at: https://doi.org/10.3389/fdata.2024.1337465.

Zamponi, M.E. and Barbierato, E. (2022) ‘The Dual Role of Artificial Intelligence in Developing Smart Cities’, Smart Cities, 5(2), pp. 728–755. Available at: https://doi.org/10.3390/smartcities5020038.

 
