CDL Licensing Failures Expose Widespread Safety Gaps on U.S. Roads

A new data analysis conducted by Bader Law reveals extensive weaknesses in the commercial driver’s license system, showing how verification failures, training gaps, and administrative errors have allowed unsafe or improperly qualified commercial drivers to remain on the road. The findings highlight a national safety issue that affects everyday drivers far more often than many realize.

Commercial trucks move freight across every region of the country, and the CDL system is designed to ensure that only qualified drivers operate these vehicles. The study shows that when the system breaks down, the consequences extend far beyond the trucking industry and into the daily lives of millions of road users.

Fatal Crash Trends Show the Stakes

Federal crash data reviewed in the study shows that large truck and bus crashes remain a significant public safety concern.

Key findings include:

  • 4,909 deaths in 2024 in crashes involving large trucks and buses
  • 5,472 deaths in 2023, an eight percent decrease from 2022 but still historically high
  • About 70 percent of people killed in large truck crashes are occupants of other vehicles

These numbers illustrate the disproportionate risk that heavy commercial vehicles pose. Even low-speed collisions involving large trucks can result in severe outcomes due to their size and weight.
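The year-over-year figures above can be sanity-checked with simple arithmetic. The sketch below assumes the "eight percent decrease" is measured against the 2022 total; the implied 2022 figure and the 2023-to-2024 change are derived values, not numbers reported in the study.

```python
# Consistency check on the fatality figures cited in the study.
# Assumption: the "eight percent decrease" compares 2023 against 2022.

deaths_2023 = 5472
deaths_2024 = 4909

# If 2023 was an 8 percent drop from 2022, the implied 2022 total is:
implied_2022 = deaths_2023 / (1 - 0.08)
print(f"Implied 2022 deaths: {implied_2022:,.0f}")  # roughly 5,948

# Year-over-year change from 2023 to 2024:
change_2024 = (deaths_2024 - deaths_2023) / deaths_2023
print(f"2023 to 2024 change: {change_2024:.1%}")    # roughly -10.3%
```

Both derived values are consistent with the study's description of 2023 as a decline that nonetheless remained historically high.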

Where and When Fatal Crashes Occur

The study highlights that most fatal truck crashes do not occur on major interstates.

  • 75 percent of fatal large truck crashes in 2023 occurred on non-interstate roads
  • 76 percent occurred on weekdays, during peak travel hours

These findings show that the risks tied to CDL oversight failures are concentrated in everyday driving environments, not isolated to long-haul freight corridors.

How the CDL System Is Designed to Work

A CDL is required for drivers operating heavy vehicles, transporting hazardous materials, or carrying passengers. The system includes several layers of oversight:

  • Knowledge and skills testing
  • Medical certification
  • Verification of identity and lawful presence
  • Entry level driver training
  • Ongoing compliance checks and roadside enforcement

When each layer functions correctly, unqualified drivers are filtered out. The study by Bader Law focuses on what happens when these layers fail or fail to communicate.

Where Licensing Breakdowns Occur

The study identifies recurring patterns in four major areas: verification, testing, training, and enforcement. These failures do not necessarily reflect individual driver misconduct. Instead, they reveal systemic weaknesses that allow improperly qualified drivers to remain licensed for months or years.

Verification Failures in Non-Domiciled CDLs

One of the most persistent issues involves non-domiciled CDLs, which are issued to foreign nationals who are lawfully present and authorized to work in the United States.

Audits show:

  • States issued CDLs without confirming lawful presence
  • Licenses were issued for periods far longer than the driver’s work authorization
  • Some licenses remained valid long after authorization expired

These failures undermine the requirement that non-domiciled CDLs must not extend beyond the driver’s authorized stay.

Testing Integrity Failures

The study highlights a major case in Massachusetts, where a former state police sergeant was convicted on nearly 50 charges for participating in a bribery scheme that exchanged passing CDL scores for gifts.

  • At least 17 drivers received fraudulent passing scores
  • Massachusetts reported a 41 percent pass rate in 2022, meaning most applicants fail under normal testing conditions

This case demonstrates how testing fraud can bypass one of the most important safety filters in the CDL system.

Training Oversight Failures

Training providers must meet federal Entry Level Driver Training standards. The study found:

  • Nearly 3,000 training providers were removed from the federal registry for noncompliance
  • About 4,000 more were placed on notice for failing to meet standards

Drivers trained through noncompliant programs may hold valid CDLs while lacking required instruction.

Roadside Enforcement and Administrative Errors

Roadside inspections reveal that many violations involve administrative lapses rather than unsafe driving behavior.

Common issues include:

  • Suspended or expired licenses
  • Missing medical certificates
  • Improper documentation

These problems show gaps in real-time compliance tracking.

Audit Findings Across Multiple States

State and federal audits provide some of the clearest evidence of systemic CDL oversight failures.

Audit Results by State

State             Audit Failure Rate   Key Findings
North Carolina    54 percent           Missing or unverified lawful presence documentation
New York          53 percent           Licenses issued without verified lawful presence
Texas             49 percent           123 records reviewed, leading to 6,400 license revocations
California        Over 25 percent      Improper expiration dates, prompting 17,000 planned revocations

These findings show that licensing failures are not isolated to one region. Instead, they reflect structural weaknesses across multiple states.

Fatal Crashes Involving CDL-Required Vehicles

The study examined fatal crashes involving vehicles requiring a CDL from 2019 through 2023.

  • 15,753 fatal crashes nationwide
  • Highest totals in:
    • Texas: 2,123
    • California: 1,146
    • Florida: 947
    • Georgia: 677

The study also identified 70 fatal crashes involving drivers who lacked proper license status at the time of the crash. While the number is small relative to the total, it shows that licensing failures can intersect with fatal outcomes.

English Proficiency Enforcement Trends

Federal rules require CDL holders to understand and communicate in English. The study found:

  • About 3.8 percent of CDL holders, or 130,000 to 140,000 drivers, are classified as limited English proficient
  • Since June 2025, enforcement agencies have issued 23,000 citations for English-language deficiencies

These citations are concentrated in Texas, Wyoming, Tennessee, Arizona, and Florida.
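The percentage and headcount above together imply a rough total CDL population. The sketch below derives that figure; it is an inference from the study's numbers, not a statistic the study reports.

```python
# Implied size of the total CDL population, derived from the study's figures.
# Assumption: the 3.8 percent share and the 130,000-140,000 count describe
# the same limited-English-proficient driver population.

lep_low, lep_high = 130_000, 140_000
lep_share = 0.038

total_low = lep_low / lep_share
total_high = lep_high / lep_share
print(f"Implied total CDL holders: {total_low:,.0f} to {total_high:,.0f}")
# roughly 3.4 million to 3.7 million
```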

Labor Pressures and Policy Shifts

The study places CDL oversight failures within the broader context of the trucking labor market.

Foreign-Born Drivers in the Workforce

  • 18 to 19 percent of U.S. truck drivers are foreign born
  • This equals roughly 650,000 drivers
  • Non-domiciled CDL holders make up about 5 percent of all CDL drivers

States like California rely heavily on foreign-born drivers, who make up nearly half of that state’s trucking workforce.
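The workforce shares above can be cross-checked the same way. The implied total trucking workforce below is a derived estimate under the assumption that the percentage range and the 650,000 headcount describe the same population; it is not a figure reported in the study.

```python
# Back-of-the-envelope check on the workforce figures.
# Assumption: "18 to 19 percent" and "roughly 650,000 drivers" both refer
# to the same foreign-born driver population.

foreign_born = 650_000

total_high = foreign_born / 0.18  # if the foreign-born share is 18 percent
total_low = foreign_born / 0.19   # if the foreign-born share is 19 percent
print(f"Implied U.S. truck drivers: {total_low:,.0f} to {total_high:,.0f}")
# roughly 3.4 million to 3.6 million
```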

Regulatory Changes Affecting Employment

A recent federal rule titled “Restoring Integrity to the Issuance of Non-Domiciled Commercial Driver’s Licenses” restricts CDL issuance for certain immigrant groups, including refugees and asylees.

  • The study estimates 194,000 drivers may eventually lose their jobs due to this rule

Second-Chance Hiring and Shadow Fleets

To address shortages, the industry has expanded second-chance hiring programs. Research shows stable employment can reduce recidivism by more than 50 percent.

The study also notes:

  • Over 190,000 drivers are listed as prohibited in the Drug and Alcohol Clearinghouse
  • 62 percent have not begun the return-to-duty process

This creates a shadow fleet of drivers who exit regulated trucking rather than reenter compliance.

What the Data Shows

The study by Bader Law concludes that CDL safety depends heavily on administrative accuracy and consistent enforcement. The data does not support claims that any demographic group is inherently unsafe. Instead, the findings show that licensing failures are institutional and systemic.

When verification steps are skipped, when training oversight lapses, or when expiration dates are misaligned, unqualified drivers can legally operate heavy commercial vehicles. The study argues that strengthening the CDL system is essential for protecting everyone who shares the road.


Political Developments in the Age of Artificial Intelligence

Milind Harsh Sardar

M.A. Political Science

Indira Gandhi National Open University, New Delhi.

Email: milindsardar100@gmail.com  

Abstract

Artificial intelligence is rapidly transforming political institutions and public life. The central research problem of this article is how AI reshapes governance structures, civil liberties, electoral politics, economic distribution and geopolitical competition. While AI promises efficiency and innovation, it also raises concerns about accountability, bias, surveillance and democratic legitimacy. The study seeks to understand how different political systems respond to these opportunities and risks. The research adopts a qualitative comparative methodology. It draws on secondary sources including academic literature, policy documents and institutional reports. The analysis compares democratic and authoritarian contexts to identify patterns of institutional adaptation. Thematic analysis is used to examine governance transformation, surveillance expansion, digital political communication, labour market restructuring and regulatory frameworks. The study concludes that the political consequences of artificial intelligence will depend not only on technological capacity but also on deliberate policy choices and institutional resilience.

Keywords: Artificial intelligence, digital governance, algorithmic accountability, electoral politics, surveillance and privacy, geopolitical competition

Introduction

Artificial intelligence is transforming political life across the globe. It shapes governance, public debate and state power. AI systems process data, identify patterns and generate predictions. These systems are embedded in administration and strategy. Governments rely on them. Corporations deploy them. Citizens interact with them daily. Politics can no longer be studied without examining artificial intelligence. Political authority depends on information. AI changes how information is gathered, analysed and applied. Decision making becomes faster. Surveillance becomes broader. Communication becomes more targeted. These shifts alter relations between citizens and institutions. They redefine accountability and transparency.

Artificial intelligence also redistributes power. Actors who control data and computational capacity gain influence. States invest in AI for economic growth and security. Private firms shape political communication through algorithmic platforms. Civil society reacts to risks linked to bias and privacy. The political sphere is therefore deeply intertwined with technological change. This research paper examines political developments in the age of artificial intelligence. It evaluates governance transformation, electoral change, surveillance expansion, geopolitical rivalry and regulatory responses. The study uses qualitative comparative analysis. It argues that AI amplifies existing political structures while introducing new institutional tensions.

Literature Review

Scholars increasingly study artificial intelligence within political science. Early research focused on digital governance. Later work expanded toward surveillance capitalism, algorithmic bias and geopolitical competition. The literature highlights both opportunities and risks. One strand emphasizes efficiency in public administration. AI can process vast datasets quickly. Public agencies use predictive analytics in health, taxation and infrastructure planning. Researchers argue that such tools improve responsiveness and reduce waste. Administrative modernization is often framed as a benefit of technological integration.

Another strand highlights inequality and bias. Algorithms learn from historical data. Historical data often reflects discrimination. Automated systems can therefore reproduce injustice. Studies show disparities in predictive policing and welfare algorithms. These outcomes raise ethical and legal concerns. Scholars call for fairness audits and transparency mandates. Surveillance studies form another important body of literature. AI enables large scale monitoring of faces, voices and behaviours. Some scholars describe a shift toward data driven governance. Surveillance can suppress dissent and chill political expression. Even democratic states face pressure to balance security and privacy.

Research on elections and communication also expands rapidly. Campaigns use machine learning to target voters. Social media platforms employ recommendation algorithms that shape exposure to information. Personalized messaging may mobilize supporters. It may also fragment public discourse. Polarization can intensify when citizens receive different streams of political content. International relations scholars analyse AI competition among states. AI is framed as a strategic asset. It influences military modernization and intelligence gathering. Some warn of an arms race dynamic. Others emphasize cooperation and norm building. The debate continues regarding autonomous weapons and global governance frameworks. Despite growing scholarship, gaps remain. Comparative studies across regime types are limited. Long term institutional impacts are still emerging. More systematic analysis is required to connect governance, rights and geopolitical change.

Methodology

This study adopts a qualitative comparative research design to examine political developments in the age of artificial intelligence. The objective is to analyse how AI influences governance, elections, civil liberties and international relations across different political systems. The research does not rely on primary field surveys or experimental data. Instead, it draws on secondary sources including peer reviewed journal articles, academic books, policy papers and institutional reports. This approach allows for comprehensive synthesis of existing knowledge.

The study uses purposive case selection. Examples are chosen from both democratic and authoritarian contexts to highlight institutional variation. Democratic systems are examined for their regulatory frameworks, public accountability mechanisms and electoral practices involving AI. Authoritarian systems are analysed for patterns of surveillance expansion, centralized control and limited transparency. The comparative structure enables identification of similarities and contrasts in political outcomes.

Analysis and Discussion

  • Governance Transformation and Administrative Power

Artificial intelligence has reshaped public administration. Governments now use algorithmic systems to manage routine tasks. These tasks include processing applications, detecting fraud and forecasting service demand. AI increases speed. It reduces operational costs. Officials often justify adoption in terms of efficiency and modernization. The structure of bureaucratic authority is changing. Traditional administration relies on hierarchical decision making. Written rules guide officials. AI introduces automated decision pathways. These pathways depend on statistical models. They do not rely on direct human judgment. Civil servants supervise these systems. Yet many do not fully understand their internal logic. A knowledge gap emerges within institutions. Technical experts gain influence. Private contractors often design and maintain systems. Administrative power shifts toward those who control data and code.

Transparency becomes more complex. Democratic governance depends on explainable decisions. When an algorithm denies welfare benefits, citizens expect justification. When predictive tools flag individuals for investigation, people seek reasons. Many AI systems function as black boxes. Their reasoning processes are difficult to interpret. This opacity limits public oversight. It can weaken trust in government institutions. Accountability also changes. If a human official makes an error, responsibility is identifiable. If an algorithm produces harm, blame becomes diffuse. Officials may claim they relied on technical outputs. Developers may argue that systems function as designed. This diffusion complicates legal remedies. Citizens may struggle to challenge decisions effectively. Courts face difficulties evaluating technical evidence.

Bias remains a central concern. AI systems learn from historical data. Historical data often reflects social inequality. If past policies discriminated, the algorithm may reproduce similar outcomes. Predictive policing tools may target marginalized neighbourhoods. Welfare screening systems may disproportionately flag vulnerable populations. These outcomes generate political controversy. Advocacy groups demand fairness audits and independent review. Administrative discretion is also altered. Algorithms standardize decisions. Standardization can reduce arbitrary treatment. It can also reduce flexibility. Human officials sometimes consider context and compassion. Automated systems rely on predefined variables. Unique circumstances may not be captured in data fields. This rigidity affects perceptions of justice.

Despite these concerns, AI offers real benefits. Data-driven planning can improve public health responses. Resource allocation can become more precise. Infrastructure management can become more efficient. Crisis response can be faster when predictive models are available. The challenge lies in balancing innovation with democratic safeguards. Governance transformation in the AI era is therefore not purely technical. It is political. It reshapes authority, accountability and citizen-state relations. Institutions must adapt deliberately. Transparent oversight and human supervision remain essential to preserve democratic legitimacy.

  • Surveillance Expansion and Civil Liberties

Artificial intelligence has greatly expanded the surveillance capacity of modern states. AI systems can process vast amounts of data in real time. They analyse video feeds, online communication and biometric information. Facial recognition technology can identify individuals in crowded public spaces. Voice recognition systems can match speech patterns to specific persons. Data aggregation tools combine information from multiple sources. These capabilities create unprecedented monitoring power. In authoritarian systems, such technologies strengthen centralized control. Continuous monitoring reduces space for dissent. Citizens may fear that online comments or physical participation in protests will be recorded. This fear can produce self-censorship. Political opposition becomes riskier. Over time, surveillance normalizes obedience. The state gains informational dominance over society.

Democratic states also use AI surveillance tools. Governments justify them through national security and crime prevention concerns. Predictive policing systems attempt to forecast where crimes may occur. Border control agencies use biometric databases. Intelligence services analyse digital communication patterns. These measures are often defended as necessary for public safety. However, they raise serious civil liberty questions. Privacy is directly affected. AI systems collect and process personal data at large scale. Individuals may not know what data is stored or how it is used. Consent becomes abstract when surveillance is embedded in public infrastructure. Mass data collection can create detailed behavioural profiles. Such profiles can reveal political preferences, associations and personal habits.

Legal safeguards vary widely across political systems. Strong judicial oversight can limit misuse. Independent data protection authorities can impose standards. Transparency requirements can increase accountability. Where these institutions are weak, surveillance may expand without constraint. Emergency powers can further justify intrusive monitoring. The expansion of AI surveillance therefore transforms the balance between security and freedom. Technological capability often advances faster than legal regulation. Without deliberate policy design, civil liberties may erode gradually. Protecting democratic rights requires continuous oversight, clear legal boundaries and active civic engagement in debates about surveillance and state power.

  • Electoral Politics and Digital Communication

Artificial intelligence has transformed electoral politics and digital communication. Political campaigns now rely heavily on data analytics and machine learning. These tools help identify voter preferences and behavioural patterns. Campaign strategists use predictive models to determine which voters are persuadable. Resources are allocated based on algorithmic assessments. This increases efficiency and strategic precision. Microtargeting is a central development. Campaigns deliver tailored messages to specific demographic groups. Different voters receive different versions of political appeals. Messages are crafted to resonate with personal interests and concerns. This personalization can increase engagement and turnout. Voters may feel that candidates understand their needs. Political communication becomes more direct and customized.

However, microtargeting also fragments the public sphere. Citizens no longer receive the same political messages. Shared national debates become segmented. Public discourse may lose common reference points. This fragmentation can weaken democratic deliberation. When groups consume different information, mutual understanding declines. Polarization can intensify as communities form around distinct narratives. Social media platforms amplify these dynamics. Recommendation algorithms prioritize content that generates engagement. Emotional or controversial posts often receive greater visibility. Political actors adapt their strategies accordingly. Campaigns design content to trigger strong reactions. Sensational messages can spread faster than balanced analysis. This creates incentives for dramatic rhetoric over thoughtful discussion.

Artificial intelligence also contributes to misinformation risks. Automated bots can simulate human users. They can spread political content at scale. Deepfake technology enables the creation of synthetic audio and video. Fabricated media can damage reputations or mislead voters. Verification often lags behind distribution. Trust in electoral integrity may suffer as a result. Regulatory responses remain uneven. Some governments require disclosure of online political advertising. Others invest in digital literacy programs. Platforms develop detection systems to identify coordinated manipulation. Yet technological innovation often moves faster than policy reform. Electoral politics in the AI era therefore reflects both opportunity and vulnerability. Democratic systems must adapt to protect transparency, fairness and informed participation in a rapidly evolving digital environment.

  • Economic Redistribution and Labour Politics

Artificial intelligence is transforming labour markets and reshaping debates about economic redistribution. Automation powered by machine learning replaces certain routine and repetitive tasks. Manufacturing, transportation and administrative support roles face significant disruption. Workers in these sectors may experience job displacement or wage stagnation. At the same time, new positions emerge in data science, software engineering and AI system maintenance. These new roles often require advanced technical skills. The gap between high skill and low skill employment can widen. This structural change influences political alignments. Workers who feel economically insecure may demand stronger social protection. They may support parties that promise redistribution or labour safeguards. Economic anxiety can fuel populist movements. Political rhetoric often frames automation as a threat to national employment. Governments face pressure to respond with targeted policies.

Retraining and education programs become central to policy agendas. States invest in digital literacy and technical training initiatives. Lifelong learning frameworks gain attention as career paths become less stable. Yet retraining programs require funding and institutional capacity. Not all workers can easily transition into high skill sectors. Geographic and socioeconomic barriers persist. This uneven adaptation deepens regional inequality. Debates about income distribution also intensify. Some policymakers propose taxing large technology firms that benefit from automation. Others advocate universal basic income as a response to potential job loss. These proposals reflect broader ideological divisions about the role of the state in managing market outcomes. Fiscal policy becomes a site of contestation linked directly to AI driven economic change.

Labour unions confront new challenges. Traditional collective bargaining models may not address platform-based work or gig economies. Algorithmic management in workplaces can monitor productivity and influence scheduling. Workers may feel reduced autonomy under data-driven oversight. Political responses must consider both technological efficiency and worker dignity. Artificial intelligence therefore reshapes labour politics in structural ways. It alters employment patterns, redistributes economic power and stimulates policy innovation. The political consequences depend on how governments manage transition. Effective redistribution strategies and inclusive growth policies can reduce tension. Failure to address inequality may intensify polarization and social unrest.

  • Geopolitical Rivalry and Strategic Competition

Artificial intelligence has become a central arena of geopolitical rivalry. Major powers view AI leadership as a source of economic strength and military advantage. Governments invest heavily in research, semiconductor production and advanced computing infrastructure. National strategies emphasize innovation, talent development and technological sovereignty. Competition over AI capacity is now linked to broader struggles for global influence. Military applications intensify this rivalry. AI supports intelligence analysis, logistics planning and autonomous systems. Autonomous weapons raise serious ethical and strategic concerns. Delegating lethal decisions to machines challenges established norms of warfare. Some states advocate international regulation or prohibition. Others argue that strategic deterrence requires continued development. The absence of binding global agreements increases uncertainty.

Technology supply chains have also become politicized. States impose export controls on advanced chips and software. Restrictions aim to limit rival access to critical components. Alliances form around shared technological standards and secure supply networks. These measures reflect fears of dependency and espionage. AI driven cyber capabilities further complicate relations. States use machine learning to enhance cyber defence and offense. Cyber operations can disrupt infrastructure and influence public opinion. Attribution remains difficult. This ambiguity heightens mistrust among competing powers.

Despite rivalry, limited cooperation persists. Multilateral forums discuss ethical principles and risk reduction. Confidence building measures are proposed to prevent escalation. However, strategic competition remains the dominant trend. Artificial intelligence is thus reshaping the global balance of power and redefining the contours of international politics.

  • Regulatory Responses and Normative Debate

The expansion of artificial intelligence has forced governments to respond. Policymakers face complex choices. AI promotes innovation and economic growth. It also creates risks for privacy, equality and democracy. Regulation has therefore become a central political issue. Different states adopt different approaches. Some governments introduce comprehensive legislation. They classify AI systems by level of risk. High-risk systems face strict obligations. These obligations include transparency, documentation and human oversight. Impact assessments are often required. This model emphasizes precaution. It treats AI governance as a matter of rights protection. Other governments prefer flexible strategies. They promote ethical guidelines instead of binding laws. Industry self-regulation is encouraged. Innovation and competitiveness are prioritized. Supporters argue that strict rules may slow technological progress. Critics respond that voluntary standards lack enforcement. Without penalties, harmful practices may continue.

Normative debate focuses on legitimacy. Democratic theory values accountable human decision making. Algorithmic governance introduces automated processes into public administration. When systems determine welfare eligibility or risk assessment, questions arise. Who is responsible for errors? Who can challenge outcomes? These issues affect democratic trust. Human oversight is widely discussed. Many scholars argue that AI should assist rather than replace human judgment. Sensitive decisions require review by accountable officials. Automation without supervision risks injustice. Oversight mechanisms must be clearly defined.

Transparency is another core concern. Citizens must understand how decisions are made. Explainable AI becomes a policy goal. Yet complex machine learning models are difficult to interpret. Governments must balance disclosure with protection of intellectual property. This tension complicates reform efforts.

International coordination remains limited. AI technologies cross borders easily. Data flows ignore national boundaries. Fragmented regulation creates loopholes. Multilateral forums attempt dialogue on standards and ethics. Progress is gradual and uneven. Regulatory responses therefore reflect deeper political values. States must balance innovation with democratic safeguards. The outcome of this debate will shape the future relationship between technology and public authority.

Conclusion and Recommendations

Artificial intelligence has become a defining force in contemporary politics. It reshapes governance, surveillance, elections, labour markets and international relations. Administrative systems now rely on data-driven tools. Political campaigns use algorithmic targeting. States expand monitoring capacity through advanced analytics. Global competition increasingly centres on technological leadership. These developments demonstrate that AI is not only a technical innovation. It is a structural political transformation. The analysis shows that AI amplifies existing power dynamics. In democratic systems, it can improve efficiency and service delivery. It can also weaken transparency if oversight is insufficient. In authoritarian contexts, AI strengthens centralized control and limits dissent. Electoral politics becomes more strategic yet more fragmented. Economic change intensifies debates about redistribution and labour protection. Geopolitical rivalry grows as states compete for dominance in research and infrastructure.

The central challenge lies in governance. Technological capability often advances faster than regulation. Without clear safeguards, civil liberties may erode gradually. Accountability becomes diffuse when algorithms shape public decisions. Democratic legitimacy depends on visible human responsibility. Institutions must therefore adapt deliberately rather than reactively. Several recommendations follow from this analysis. First, governments should establish clear legal frameworks for high-risk AI systems. Transparency requirements and independent audits are essential. Citizens must have the right to explanation and appeal. Second, strong data protection laws should safeguard privacy. Surveillance tools must operate under judicial oversight and defined limits. Third, investment in digital literacy should expand. An informed public is better equipped to resist manipulation and misinformation.

Fourth, labour market policies must address economic displacement. Retraining programs and social protection measures can reduce inequality. Policymakers should ensure that benefits of AI innovation are broadly shared. Fifth, international dialogue on autonomous weapons and cross border data governance should continue. Cooperative norms can reduce destabilizing competition. Artificial intelligence will continue to evolve. Political institutions must remain flexible and vigilant. The future of democracy and global stability depends on how societies govern this transformative technology.
