Note: I started this article at the end of 2024. It provided a backdrop of a transitional political landscape in Canada that will create a wider disparity as AI advances. Since the emergence of the new U.S. Republican Administration, the world has been thrown into chaos. In Canada, the sweeping implications at home are already being felt.
Canada stands at a critical juncture in artificial intelligence. In 2024, the Canadian government proposed a $2.4 billion investment to secure Canada’s advantage as a global AI powerhouse. AI adoption is expected to have a significant impact on Canada’s economy. Google’s Economic Impact Report predicts that AI could boost Canada’s economy by $230 billion and save the average worker 175 hours per year.
However, despite this surge of investment, a troubling pattern has emerged: new technologies are reinforcing and amplifying existing social and economic gaps rather than closing them. The escalating geopolitical events south of the border compound AI uncertainties, while the upcoming political transition at home will test Canada’s mettle in the coming years.
Michelle Baldwin, former senior advisor of transformation at Community Foundations of Canada, points out, “Among the country’s 170,000 nonprofits, organizations meant to serve communities in need, only 7% have adopted AI tools, signaling a fundamental disconnect between technological progress and social benefit.”
“The space is moving so fast in a context where there was already poor literacy outside of professional circles,” warns Renee Black, Founding Executive Director of GoodBot Society. She points to a dangerous trend where each new technological advancement creates additional barriers for already struggling communities. This concern is echoed by Natiea Vinson, Chief Executive Officer of the First Nations Technology Council (FNTC), who identifies how AI systems built on Western frameworks often fail to serve Indigenous community needs.
These voices from Canada’s social sector paint a picture of a country where AI advancement, which could help address inequality, instead threatens to create a two-tiered society and exacerbate inequities at scale: one tier equipped with the latest AI tools and capabilities, the other increasingly left behind by systems it cannot access, influence, or benefit from. The pattern extends beyond simple access to technology, touching fundamental questions about who shapes these systems, who benefits from them, and who bears their costs.
The Layered Effect: How Technology Amplifies Social Inequities
Black describes this as a layered problem, where “every inequality that existed before risks being exacerbated by a new layer of technology.” Technology, she explains, does not act in isolation but builds upon pre-existing flaws in digital infrastructure, risking deeper divides when tools are built without consideration of existing inequalities. She compares it to a Jenga tower: as each strategic block is removed, the structure teeters, and the layered dependencies that remain become far more vulnerable.
The issue starts with basic infrastructure gaps but extends to more subtle forms of exclusion. For remote communities, the lack of reliable broadband creates an immediate barrier to participation in the digital economy, and becomes more severe as essential services increasingly move online.
For social media platforms, the drive for engagement creates a troubling cycle. Black explains, “This optimization for engagement generally favors content that triggers strong emotional responses, including harmful, hateful and polarizing material.” User engagement translates to attention which translates to profit, creating a system that unfairly rewards polarizing content, regardless of the societal impacts it creates.
These problems are only compounded by AI. Rather than correcting these issues, AI systems built on top of platforms often automate and amplify problematic patterns. As Black puts it, “unless you address some of the fundamental problems around behaviors that are incentivized by a particular tool or a particular model,” new technological layers will simply reinforce existing problems.
This “tech stack” of inequality, as Black describes it, extends beyond social media to affect access to technology skills and infrastructure, creating what she calls a pipeline effect. Each new technological advancement builds upon previous disparities which, in some cases, make it increasingly difficult for disadvantaged groups to catch up without intentional intervention.
The result is a system where technological progress can become a multiplier of existing social and economic gaps. This amplification effect, Black warns, risks continuing unless we fundamentally rethink how new technologies are developed and implemented, with attention to their social impacts and the incentives they create. She adds, “It is possible to ‘skip’ layers. In Africa, they skipped a lot of the wired infrastructure for phones and jumped to satellite. There are opportunities in coming from behind, however they require intentional cultivation.”
The Growing Digital Divide
Canada’s expanding digital divide manifests through interconnected challenges that affect communities across Canada. At its core, the gap between urban and rural access to digital infrastructure creates foundational inequities. This imbalance creates what Black describes as “compounded disadvantages,” where communities lacking basic digital infrastructure fall increasingly behind as AI technologies become essential for education, healthcare, and economic opportunity.
The challenge extends beyond physical infrastructure into organizational capacity and resources. Historical underinvestment in digital public infrastructure, particularly in the nonprofit sector, has created a persistent cycle of technological poverty. Black points out that the “lack of resources generally for nonprofits” creates barriers to technology adoption and is further complicated by burdensome and strict funding mechanisms for investing in infrastructure.
The knowledge gap is another key barrier. Black highlights that “low literacy outside professional circles” creates multiple, compounding challenges. Community organizations often struggle to understand AI capabilities and risks, while organizations serving vulnerable populations lack technical expertise. Many groups hesitate to adopt new technologies due to negative experiences and distrust of the private sector, especially today. The lack of an inclusive digital literacy strategy perpetuates this cycle of low adoption and innovation.
Vinson describes a pattern of “digital colonialism” where AI systems extract value from communities without returning meaningful benefits. This includes widespread data collection without community consent, AI models that remain inaccessible to the communities whose data trained them, commercial exploitation of cultural and social information, and limited community input in AI development and deployment.
Data sovereignty has emerged as a critical concern. Baldwin explains that these groups have demanded the right to manage, protect, and benefit from their own data; as she puts it, “AI has the power to either uphold these rights by respecting self-determination or violate them when systems are designed without direct community input.”
Vinson notes that existing AI systems prioritize dominant, Western-centric perspectives, often marginalizing Indigenous worldviews, values, and oral traditions, critical to cultural preservation, adding, “These systems rarely incorporate or respect Indigenous knowledge frameworks. If an LLM generates content about governance but excludes Indigenous approaches like consensus-based decision-making or clan systems, it perpetuates the erasure of these practices.”
Baldwin emphasizes how power imbalances in AI development contribute to these issues. When underserved groups have “little to no input in how AI tools are designed, deployed, and funded,” the resulting solutions often fail to address their needs or, worse, perpetuate existing biases. Black points out that AI frequently fails, adding, “It can see patterns but not always purpose or context. It can see what you engage with but not why or whether it advances your aspirations.”
The financial divide in Canada’s social sector reveals a paradox: “While Canada’s 11,000 foundations collectively hold $135 billion in assets and distribute $11 billion annually,” according to Baldwin, these substantial resources rarely translate into meaningful technological support for underserved communities. Black emphasizes this disconnect, noting how funding restrictions often prevent nonprofits from making necessary technology investments, even when foundations have significant resources at their disposal.
Vinson identifies a more subtle but concerning trend, “AI systems frequently oversimplify or misinterpret complex cultural practices and knowledge systems,” she explains, “because they are built on linear, reductionist logic.” This technological reductionism strips away critical context from cultural knowledge, leading to misrepresentation and the diminishment of diverse worldviews in digital spaces.
The resource demands of AI technology create another layer of exclusion. While wealthy institutions invest in sophisticated AI tools to enhance their productivity, organizations serving marginalized communities struggle to access even basic AI capabilities. As Black observes, this gap extends beyond financial resources into expertise, infrastructure, and the ability to shape how technologies are developed.
Baldwin adds that privacy vulnerabilities have emerged as a particular concern for underserved populations. The nonprofit sector, often lacking robust cybersecurity measures, becomes an unwitting conduit for data breaches that can disproportionately impact vulnerable communities. This risk increases as organizations feel pressured to adopt AI tools without adequate resources for security measures.
Perhaps most concerning is what Baldwin describes as the “erosion of human judgment” in decision-making processes. As algorithms increasingly shape everything from resource distribution to policy recommendations, communities with limited technological access lose their voice in decisions that directly affect their lives. This problem is compounded by a well-resourced lobbying machine that aims to advance industry-friendly policy.
“Technology is useful as an instructional tool,” Black notes, “but unless we are intentional, we could be moving toward a future in which public institutions, as we know them, are replaced almost entirely by private intermediaries. We need to ask ourselves carefully if this is the future we want.” The cumulative effect of these current and emerging harms is creating what Vinson calls a “compound disadvantage,” where each new application of AI risks pushing vulnerable communities further from the benefits of digital advancement while subjecting them to its increasing risks. Without intervention, this pattern threatens to create an increasingly unbridgeable gap between those who can harness AI’s potential and those who bear its costs.
Canada In The Crosshairs: How U.S. Tech Deregulation Could Reshape Innovation, Data Privacy, and Society
South of the border, a newly installed administration and a conservative agenda known as Project 2025 risk exacerbating digital disparities while reshaping data and policy sovereignty. As Vox reported,
“Trump has in fact promised massive tax cuts for billionaires — but it leaves out the deeper, darker forces at work here. For the tech bros — or as some say, the broligarchs — this is about much more than just maintaining and growing their riches. It’s about ideology. An ideology inspired by science fiction and fantasy. An ideology that says they are supermen, and supermen should not be subject to rules, because they’re doing something incredibly important: remaking the world in their image.”
This plan to reshape the U.S. government and implement far-reaching policy changes across multiple sectors, with an emphasis on deregulation, could dramatically transform the tech sector. The changes range from the rescission of Biden’s AI Executive Order 14110, which imposed safety requirements and oversight on AI development, to relaxed cryptocurrency regulations, the reversal of Obama-era net neutrality rules, eased antitrust investigations that reduced scrutiny of big tech companies, and an executive order that could jeopardize the EU-U.S. Data Privacy Framework, which currently enables transatlantic data flows in compliance with the EU General Data Protection Regulation. Collectively, these changes favor U.S. tech giants at the expense of societal and environmental concerns in the U.S., in Canada, and around the world.
At home, Canada faces the impending political changing of the guard amid a tariff war with the U.S. which will have compounding effects from the policy shifts under a second Trump administration.
The relaxation of AI regulations in the U.S. could put Canadian AI and tech startups at a competitive disadvantage. While Canada maintains some oversight mechanisms to ensure ethical AI development, U.S. companies may accelerate innovation with fewer safeguards, which could make it harder for responsible AI companies to compete on a global scale.
The regulatory divergence between Canada and the U.S. could also create new data privacy challenges. If the U.S. weakens its data protection laws, Canadian businesses handling sensitive information may struggle to maintain compliance when engaging in cross-border operations. The potential rollback of the EU-U.S. Data Privacy Framework could also disrupt transatlantic data flows, impacting Canadian businesses that rely on international data transfers.
The weakening of antitrust oversight and consumer protection in the U.S. could further consolidate power among major tech corporations. If U.S. regulators reduce scrutiny on big tech mergers and monopolistic behavior, smaller Canadian startups may face even greater challenges in market entry and fundraising. This could stifle innovation and limit opportunities for Canadian entrepreneurs in the tech sector.
Political Crossroads: AI Equity At Risk Under Conservative Leadership
Should a Conservative majority government take power in 2025, the roadmap for emerging technologies could pivot overnight. Black recognizes that technology policy has become increasingly politicized and warns this can compromise policies that protect and serve the public interest. However, she stresses that all parties should be concerned about these issues and calls for dialogue with all political parties to understand and address the long-term implications of policy options, including the failure to adopt meaningful, rights-respecting policy.
Black highlights the importance of developing sovereign Canadian technology policy and fostering domestic innovation, while also strengthening cooperation with other like-minded jurisdictions. Even now, Canadian tech companies are frequently absorbed by large U.S.-based companies, and she suggests policy reform can strengthen safety and sovereignty while also enabling domestic innovation.
As Baldwin warns, political shifts can fundamentally alter how public funds flow, often favoring short-term gains over socially responsible innovation. For communities already struggling with digital access, these changes can have profound and lasting consequences.
The most immediate threat, as FNTC’s Vinson warns, is that “conservative governments often prioritize austerity measures and cuts to social spending. Funding for Indigenous digital infrastructure, AI research, and digital literacy programs may be deprioritized or eliminated.” These cuts could disproportionately affect communities already facing significant digital divides, which could lose access to essential resources for engaging with AI and emerging technologies.
Infrastructure commitments face a particular risk. Vinson emphasizes how “commitments to broadband expansion in rural and remote Indigenous communities could be rolled back, further exacerbating disparities in access to technology.” Meanwhile, Baldwin notes that economic imperatives could overshadow ethical considerations, risking further entrenchment of inequities for marginalized communities.
Reconciliation efforts could also suffer. Vinson points out that under conservative leadership, “reconciliation may be deprioritized, with less emphasis on incorporating Indigenous voices and values into AI governance or tech policy.” She warns that “efforts to align AI systems with Indigenous worldviews may lose momentum,” while government-supported initiatives like FNTC could face reduced support, hindering progress on Indigenous-led tech solutions.
However, sector leaders are developing proactive strategies to sustain momentum regardless of political shifts. Baldwin emphasizes the importance of “building relationships across political parties and listening to understand their perspectives and platforms.” This approach focuses on finding alignment in shared issues and opportunities across all parties, making digital equity a non-partisan priority.
Optimism In 2025?
As Canada shapes its AI future, the challenges of bridging social divides while advancing technology require careful attention and deliberate action. AI development, without proper guidance, risks amplifying and entrenching existing inequities rather than resolving them.
The remarks from Vinson, Black and Baldwin emphasize that Canada’s technological success should be measured by its ability to serve all communities, not just those already well-resourced. Moving forward demands fostering innovation while protecting vulnerable populations, growing the economy while strengthening social programs, and advancing technology while preserving cultural values.
Black and Baldwin both stress the importance of coalition building, where nonprofits and philanthropy unite with ethical tech innovators to protect vulnerable communities, even if federal priorities change. They advocate for raising awareness and building collective capacities through community dialogues and media engagement to hold policymakers accountable for the social impacts of AI policy inaction.
The challenge lies not just in protecting existing initiatives but in fundamentally reshaping how Canada approaches AI development, towards a trustworthy digital future regardless of political party. Black is optimistic, proposing that there is space to shift the current “race to the bottom” into a “race to the top” – one focused on sovereignty, trust and well-being. She suggests there is appetite and momentum for alternative visions of digital futures centered on trust, which could give Canada a unique competitive advantage, especially in the face of rising uncertainty south of the border. Getting there requires collaboration.