<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.brownsbookshelf.ca/blogs/Uncategorized/feed" rel="self" type="application/rss+xml"/><title>Brown's Bookshelf - Blog , Uncategorized</title><description>Brown's Bookshelf - Blog , Uncategorized</description><link>https://www.brownsbookshelf.ca/blogs/Uncategorized</link><lastBuildDate>Thu, 18 Dec 2025 03:49:17 -0500</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[284 Days of Silence]]></title><link>https://www.brownsbookshelf.ca/blogs/post/284-days-of-silence</link><description><![CDATA[When major data breaches make headlines, attention almost always focuses on the same question: how did the attackers get in? In the case of the BCBS-Co ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_nT44XPP1QSCghkNa8SSopw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_y8JBNOZJTAuBNMnQ1ERvsg" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_9Pp7Rd3aTfuvPqEGqIB76g" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_U6qjw90KTNizS6TN0CnzBA" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span><span><span style="font-weight:700;">The Real Risk Exposed by the BCBS–Conduent Breach</span></span></span></h2></div>
<div data-element-id="elm_8av5kloiQp6ek_leRQH6hA" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p><span><span></span></span></p><p></p><p></p><p></p><p style="text-align:left;margin-bottom:12pt;">When major data breaches make headlines, attention almost always focuses on the same question: how did the attackers get in?</p><p style="text-align:left;margin-bottom:12pt;">In the case of the BCBS-Conduent breach, that question matters. It is simply not the most important one.</p><p style="text-align:left;margin-bottom:12pt;">What matters more is this. Two hundred and eighty-four days passed between Conduent’s discovery of the incident and public disclosure.</p><p style="text-align:left;margin-bottom:12pt;">In environments handling protected health information and sensitive personal data, delays of that length are rarely explained by technical complexity alone. More often, they expose something deeper: a disconnect between documented compliance controls and the operational reality of incident detection, scoping, and response. This article is not an accusation. It is an examination of what extended silence tends to signal across industries, across years, and across many well-documented breaches.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">The Incident, Briefly and Factually</span></h5><p style="text-align:left;margin-bottom:12pt;">Conduent is a large business services provider that supports claims processing and related functions for multiple healthcare organizations, including Blue Cross Blue Shield entities. 
In that role, it operates as a business associate under HIPAA, with access to substantial volumes of protected health information and personally identifiable information.</p><p style="text-align:left;margin-bottom:12pt;">Public reporting and regulatory filings indicate that Conduent discovered unauthorized activity in January 2025. Notifications to affected entities, regulators, and the public began in October 2025. The elapsed time between discovery and disclosure was two hundred and eighty-four days. The exposed data included both PII and PHI, which triggered statutory notification and compliance obligations.</p><p style="text-align:left;margin-bottom:12pt;">What Conduent has not publicly detailed is also notable. There has been no public accounting of the precise attack vector, the detection method, dwell time, how scope was determined, or why disclosure took more than nine months. That silence is not proof of wrongdoing. It is, however, informative.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">Why Disclosure Timing Matters More Than Entry Vector</span></h5><p style="text-align:left;margin-bottom:12pt;">Modern security programs assume breaches will occur. The differentiator is not whether an organization is compromised, but how it responds once it knows. Incident response maturity is revealed across three related timelines: how quickly an organization detects an incident, how effectively it scopes the impact, and how promptly it communicates that information to stakeholders.</p><p style="text-align:left;margin-bottom:12pt;">Of these, disclosure timing is often the most revealing because it sits at the intersection of security operations, legal and regulatory obligations, executive decision-making, and governance maturity. 
A short disclosure window usually indicates that an organization can detect incidents reliably, maintains accurate data inventories, can scope affected systems and data with confidence, and has pre-established authority to make notification decisions. Extended delays suggest the opposite. They typically reflect organizational uncertainty rather than malicious intent.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">What Long Delays Usually Indicate, Without Speculation</span></h5><p style="text-align:left;margin-bottom:12pt;">Across multiple industries and decades of breach analysis, prolonged disclosure delays tend to correlate with a consistent set of structural problems. Organizations often delay disclosure because they cannot confidently answer which systems were accessed, what data was exposed, or which customers or patients were affected. This uncertainty commonly reflects weak logging, fragmented data ownership, or incomplete asset inventories. These are governance failures, not purely technical ones.</p><p style="text-align:left;margin-bottom:12pt;">Long delays also tend to surface a second issue: compliance controls that exist primarily on paper. Many organizations maintain documented incident response and breach notification procedures that assume ideal conditions, including complete visibility, clear ownership, and accurate system inventories. When reality diverges from documentation, timelines stretch.</p><p style="text-align:left;margin-bottom:12pt;">A third factor is legal and regulatory gating. Extended silence often reflects internal tension between security teams pushing for notification, legal teams seeking certainty, and executives concerned about liability and reputational risk. Strong governance resolves these tensions quickly. Weak governance allows them to stall response.</p><p style="text-align:left;margin-bottom:12pt;">Finally, third-party risk frequently compounds delay. 
When breaches occur at vendors or business associates, organizations often discover that notification timelines are vague, oversight into the vendor’s detection capabilities is limited, and security assurances were accepted without independent validation. In healthcare, where data aggregation is extreme, this risk is magnified.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">This Pattern Is Not Unique to Conduent</span></h5><p style="text-align:left;margin-bottom:12pt;">The BCBS–Conduent timeline fits a broader historical pattern. Equifax disclosed its 2017 breach roughly forty days after discovery and later faced regulator findings that cited deficiencies in its security program. Marriott’s Starwood breach involved an eighty-three-day disclosure delay that regulators tied to longstanding data governance failures following acquisition. Yahoo’s breach history involved years-long delays that were later linked to internal knowledge and disclosure breakdowns. Uber concealed its 2016 breach for approximately a year, a decision later cited explicitly as a governance and compliance failure.</p><p style="text-align:left;margin-bottom:12pt;">By contrast, organizations that disclosed promptly, such as Target, Capital One, and Anthem, still suffered security failures. The difference was not technical sophistication. It was process transparency. Prompt disclosure does not mean strong security. Delayed disclosure often signals weak governance.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">Compliance Versus Operational Reality</span></h5><p style="text-align:left;margin-bottom:12pt;">Compliance frameworks such as HIPAA, ISO 27001, SOC 2, and NIST all emphasize incident response and breach notification. Frameworks, however, do not investigate breaches. People do.</p><p style="text-align:left;margin-bottom:12pt;">When auditors review controls, they see policies. When incidents occur, reality intervenes. 
Extended disclosure delays often reveal that data flows are poorly understood, system ownership is fragmented, exceptions have accumulated without documentation, third-party assurances were taken at face value, and incident response plans assumed capabilities that did not exist. In short, compliance maturity on paper exceeded operational maturity in practice.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">What “Good” Would Have Looked Like</span></h5><p style="text-align:left;margin-bottom:12pt;">A mature response to a breach involving protected health information typically involves rapid detection through monitoring, early containment even before full scoping is complete, staged disclosure beginning with regulators and upstream partners, transparent updates as scope becomes clearer, and clear articulation of uncertainty rather than prolonged silence. This is not easy. It is, however, achievable, and many organizations demonstrate it regularly.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">Why Regulators Focus on Process, Not Perfection</span></h5><p style="text-align:left;margin-bottom:12pt;">Regulators do not expect zero breaches. They expect reasonable safeguards, timely detection, honest disclosure, and evidence that governance structures function under stress. A delay of two hundred and eighty-four days inevitably raises questions. Those questions are not about whether controls existed, but whether they worked when it mattered.</p><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">The Real Lesson of the 284 Days</span></h5><p style="text-align:left;margin-bottom:12pt;">The most important takeaway from the BCBS-Conduent breach is not technical. It is organizational. Silence of that length usually means an organization is struggling to reconcile what happened with what its controls said should have happened. That struggle is the true risk. 
It is also one that audits, certifications, and attestations often fail to detect until an incident forces the issue.</p><h3 style="text-align:left;margin-bottom:4pt;"></h3><h5 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:normal;">Final Thought</span></h5><p style="text-align:left;margin-bottom:12pt;"><span>Breach headlines come and go. Timelines endure. When nearly nine months pass between discovery and disclosure in a regulated environment, the story is no longer about attackers, vectors, or social engineering. It is about governance, and whether it functions when it matters most.</span></p><div style="text-align:left;"><br/></div><h2 style="text-align:left;margin-bottom:4pt;"><span style="font-size:16px;"><br/></span></h2><h2 style="text-align:left;margin-bottom:4pt;"><span style="font-size:16px;"><br/></span></h2><h2 style="text-align:left;margin-bottom:4pt;"><span style="font-size:16px;"><br/></span></h2><h2 style="text-align:left;margin-bottom:4pt;"><span style="font-size:16px;">Sources and References</span></h2><p style="text-align:left;margin-bottom:12pt;"><span style="font-size:16px;">Information regarding the Conduent incident timeline and affected populations is drawn from U.S. state attorney general breach notification portals, including publicly filed notices and correspondence submitted in 2025, as well as reporting by HIPAA Journal and Cybersecurity Dive on the Conduent healthcare breach disclosures.</span></p><p style="text-align:left;margin-bottom:12pt;"><span style="font-size:16px;">Historical breach timelines and regulatory findings referenced in this article rely on primary disclosures and regulator reports, including Equifax public statements and findings by the Office of the Privacy Commissioner of Canada, Marriott International breach notifications and Canadian privacy regulator reports related to the Starwood acquisition, U.S. 
Senate Commerce Committee findings regarding the Target breach, Capital One public disclosures and regulatory filings, Anthem disclosure statements filed with state insurance regulators, and Federal Trade Commission enforcement actions related to Uber’s 2016 breach.</span></p><p style="text-align:left;margin-bottom:12pt;"><span style="font-size:16px;">Additional context on disclosure obligations and governance expectations is informed by published guidance from the U.S. Department of Health and Human Services on HIPAA breach notification requirements, SEC cybersecurity disclosure rules for public companies, and NIST incident response guidance.</span></p><div><br/></div><p style="text-align:left;margin-bottom:12pt;"></p><p style="margin-bottom:12pt;"></p><p></p><p></p><div><br/></div><p></p><p></p><p style="text-align:left;margin-bottom:12pt;"><span style="font-size:16px;"></span></p><p></p></div>
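The headline number is easy to verify for yourself. A minimal Python sketch follows; the specific dates are illustrative assumptions consistent with the publicly reported January 2025 discovery and October 2025 notifications, and the 60-day figure is the outer notification deadline under the HIPAA Breach Notification Rule.

```python
from datetime import date

# Illustrative dates only: public reporting places discovery in January 2025
# and the start of notifications in October 2025; the exact days are assumptions.
discovered = date(2025, 1, 13)
disclosed = date(2025, 10, 24)

silence = (disclosed - discovered).days
print(silence)        # 284

# HIPAA's Breach Notification Rule requires notice without unreasonable
# delay and in no case later than 60 days after discovery of a breach.
print(silence > 60)   # True, by a factor of more than four
```

Whatever the precise dates turn out to be, any January-to-October gap lands in the same range, which is why the timeline itself, not the entry vector, carries the story.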
</div><div data-element-id="elm_ce3AvxwlSpq50KYvTWnpeA" data-element-type="button" class="zpelement zpelem-button "><style></style><div class="zpbutton-container zpbutton-align-center zpbutton-align-mobile-center zpbutton-align-tablet-center"><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md " href="javascript:;" target="_blank"><span class="zpbutton-content">Get Started Now</span></a></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 15 Dec 2025 07:36:11 -0500</pubDate></item><item><title><![CDATA[Power Hungry]]></title><link>https://www.brownsbookshelf.ca/blogs/post/power-hungry</link><description><![CDATA[If you hang out on tech Twitter, Threads, or LinkedIn long enough, you’ll eventually see the same refrain: “AI is an environmental disaster. These mode ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_OCsyHy2qQnSvJYUks9me3Q" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_7HqlErIHQ6Gcp_TBBijUKQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_gD8hVfM1Qc2ACwUtH8U0xg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_djGI9jD3T-O2gpkBvnex7w" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span><span style="font-weight:400;">Is AI Really the Energy Villain?</span></span></h2></div>
<div data-element-id="elm_WGbEz3y0SSG22hToa_vhkg" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><h2 style="text-align:left;margin-bottom:6pt;"><br/></h2><p style="text-align:left;"><span>If you hang out on tech Twitter, Threads, or LinkedIn long enough, you’ll eventually see the same refrain:</span></p><p style="text-align:left;"><span>“AI is an environmental disaster. These models are boiling the oceans.”</span></p><div style="text-align:left;">Is AI energy-hungry? Absolutely.</div><span><div style="text-align:left;">Is it the main thing dragging the grid to its knees? Not yet. And if we’re going to have a grown-up conversation about power use, we need to stop pretending AI runs on its own private electricity.</div></span><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">First, zoom out: data centres vs the whole grid</span></h3><p style="text-align:left;"><span>According to the International Energy Agency (IEA), data centres worldwide used about 415 terawatt-hours (TWh) of electricity in 2024, roughly 1.5% of global electricity consumption. 
That load is growing fast and is expected to more than double to around 945 TWh by 2030.</span></p><p style="text-align:left;"><span>So yes, the data-centre footprint is real, and AI is a big part of why those projections are climbing.</span></p><p style="text-align:left;"><span>But that 1.5% is all data-centre workloads lumped together:&nbsp;</span>social media, video streaming, gaming, SaaS, classic web hosting, plus AI training and inference.</p><div style="text-align:left;"><br/></div><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">How much of that is actually AI?</span></h3><p style="text-align:left;"><span>Recent analysis reviewed in Carbon Brief suggests that AI currently accounts for roughly 5–15% of data-centre power use, with a plausible path to 35–50% by 2030 if generative AI keeps scaling as projected.</span></p><p style="text-align:left;"><span>So today, the majority of data-centre energy is still going to the “boring” stuff:&nbsp;</span>serving video, running social feeds, ad tech, cloud storage, regular enterprise workloads.</p><div style="text-align:left;"><br/></div><p style="text-align:left;"><span>Think of it as the newest tenant in a building that was already over-air-conditioned.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">Streaming and social: the original energy hogs</span></h3><p style="text-align:left;"><span>This is the part that rarely makes the headlines.</span></p><p style="text-align:left;"><span>Work from researchers like Andy Masley has shown that streaming Netflix and YouTube consumes far more energy overall than services like ChatGPT, once you scale to global usage.</span><a href="https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/artificial-intelligence-and-the-environment-putting-the-numbers-into-perspective/?utm_source=chatgpt.com"><span></span></a><span>In one comparison, he estimated that annual energy use associated
with ChatGPT was comparable to a small U.S. region, while video streaming as a whole matched the electricity use of all New England plus New York.</span></p><p style="text-align:left;"><span>Older IEA work on streaming puts one hour of video on a smartphone over Wi-Fi at about 0.037 kWh of electricity, most of which is in data transmission and the device, not the data centre alone.</span><span>&nbsp;That sounds small until you multiply it by billions of hours of video per day.</span></p><p style="text-align:left;"><span>On top of that, Canadian research has suggested that streaming video alone contributes over 1% of global greenhouse gas emissions, driven by sheer volume and a classic rebound effect: the easier it is to stream, the more we do it.</span></p><p style="text-align:left;"><span>So when we talk about “the internet’s energy problem,” we’re really talking about an attention economy problem:&nbsp;</span>doomscrolling, infinite video feeds, autoplay everything, cloud gaming, and increasingly, AI on top of all that.</p><div style="text-align:left;"><br/></div><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">So why does AI feel like the villain?</span></h3><p style="text-align:left;"><span>Part of it is visibility. Large language models are tangible.
You type a prompt, something happens, and journalists can point at a single query and say: “This uses 10× a Google search.” They’re not wrong; estimates put a typical ChatGPT query at roughly that order of magnitude.</span></p><p style="text-align:left;"><span>But most people don’t think of five hours of TikTok or Netflix as “energy use.” It’s just Tuesday.</span></p><p style="text-align:left;"><span>AI becomes the villain because it’s new, concentrated, and visible, while streaming and social are just “background noise” even though they still dominate the energy pie in absolute terms.</span></p><p style="text-align:left;"><span>If you only blame AI, you’re not doing climate policy. You’re doing narrative management.</span></p><p style="text-align:left;"><span><br/></span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">Where AI is a real grid problem</span></h3><p style="text-align:left;"><span>Now for the part where I agree with the critics.</span></p><p style="text-align:left;"><span>Even if AI is a minority slice today, it’s driving where and how fast data-centre demand grows. IEA, Nature and others all converge on the same point: data-centre electricity use is likely to at least double by 2030, largely because of AI workloads.</span></p><p style="text-align:left;"><span>BloombergNEF projects U.S. data-centre power demand reaching about 106 gigawatts by 2035, a sharp jump from earlier forecasts.</span><span>&nbsp;Pew Research estimates data centres already account for about 4% of U.S. 
electricity use, with demand expected to more than double by 2030.</span></p><p style="text-align:left;"><span>That doesn’t mean “AI breaks the grid globally,” but it does mean local pain:&nbsp;</span>stressed regional grids, higher prices near hyperscale clusters, and awkward conversations about who gets power priority.</p><div style="text-align:left;"><br/></div><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">Why I keep betting on task-specific silicon</span></h3><p style="text-align:left;"><span>This is where my inner infrastructure nerd and my inner pragmatist meet.</span></p><p style="text-align:left;"><span>We’ve solved this kind of problem before:</span></p><p style="text-align:left;"><span><br/></span></p><div style="text-align:left;">Video encoding moved to ASICs and specialized hardware blocks.</div><div style="text-align:left;">Crypto mining migrated from GPUs to ASICs because power costs killed everything else.</div><div style="text-align:left;">Networking offload went from “just use the CPU” to smart NICs and DPUs.</div><div style="text-align:left;"><br/></div><p style="text-align:left;"><span>Specialized AI accelerators and ASICs can deliver order-of-magnitude efficiency gains over general-purpose hardware.
Some surveys put performance-per-watt improvements in the 10–50x range for certain workloads compared to classic CPUs and GPUs.</span><span>&nbsp;Google’s TPU v4 shows roughly 2.7x better performance per watt than its previous generation.&nbsp;</span><span>Startups like Positron claim their inference ASICs can beat Nvidia’s H200 systems on throughput while using about one-third of the power.</span><a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/positron-ai-says-its-atlas-accelerator-beats-nvidia-h200-on-inference-in-just-33-percent-of-the-power-delivers-280-tokens-per-second-per-user-with-llama-3-1-8b-in-2000w-envelope?utm_source=chatgpt.com"><span>&nbsp;</span></a></p><div style="text-align:left;">In plain language:</div><span><div style="text-align:left;">If we let the hardware catch up, AI’s watt-per-output story gets dramatically better.</div></span><p style="text-align:left;"><span>That doesn’t magically erase training footprints or embodied emissions, but it does mean the “AI will eat the grid” narrative is not a law of physics. It’s an engineering problem.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:18px;">So what the hell are we actually arguing about?</span></h3><p style="text-align:left;"><span>If you care about the energy impact of AI, here are the more useful questions:</span></p><div style="text-align:left;"><p>How fast can we push the transition from general-purpose GPUs to more efficient accelerators?</p><p>How do we stop building data centres in places where the grid is already on life support?</p><p>How do we account for all high-bandwidth attention platforms (video, social, gaming, AI) instead of picking a single villain of the week?</p><p>And how do we design policy that rewards genuine efficiency, not just better PR?</p><p><br/></p></div><p style="text-align:left;"><span>AI is not innocent. 
But it’s also not the only one at the buffet.</span></p><p style="text-align:left;"><span>If we only yell at AI while ignoring streaming, social and the rest of the modern internet stack, we’re not saving the planet. We’re just doing climate cosplay.</span></p><p style="text-align:left;"><span>And to borrow from a metaphor I used elsewhere: that would be like James Bond taking out No. 3 and ignoring the rest of SPECTRE.</span></p><p style="text-align:left;"><span><br/></span></p><p><span></span></p><div style="text-align:left;"></div><p></p><div style="text-align:left;"><span style="font-size:16px;font-style:italic;"><br/></span></div><div style="text-align:left;"><span style="font-size:16px;font-style:italic;"><br/></span></div><div style="text-align:left;"><span style="font-size:16px;font-style:italic;">Sources for information used in this article can be found at the following links -&nbsp;<br/><br/></span><div><div><span style="font-size:16px;font-style:italic;">https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai</span></div><div><span style="font-size:16px;font-style:italic;">https://www.carbonbrief.org/ai-five-charts-that-put-data-centre-energy-use-and-emissions-into-context/</span></div><div><span style="font-size:16px;font-style:italic;">https://www.iea.org/commentaries/the-carbon-footprint-of-streaming-video-fact-checking-the-headlines</span></div><div><span style="font-size:16px;font-style:italic;">https://sshrc-crsh.canada.ca/society-societe/community-communite/ifca-iac/evidence_briefs-donnees_probantes/earth_carrying_capacity-capacite_limite_terre/pdf/SSHRC%20KSG%20Evidence%20Brief_Marks%20Laura_FinalE.pdf</span></div><div><span style="font-size:16px;font-style:italic;">https://www.nature.com/articles/d41586-025-01113-z</span></div><div><span style="font-size:16px;font-style:italic;">https://about.bnef.com/insights/clean-energy/ai-and-the-power-grid-where-the-rubber-meets-the-road/</span></div><div><span 
style="font-size:16px;font-style:italic;">https://aimagazine.com/news/ai-data-centres-will-drive-a-165-power-demand-explained</span></div><div><span style="font-size:16px;font-style:italic;">https://journalwjarr.com/sites/default/files/fulltext_pdf/WJARR-2025-1501.pdf</span></div><div><span style="font-size:16px;font-style:italic;">https://arxiv.org/abs/2304.01433</span></div><div><span style="font-size:16px;font-style:italic;">https://www.tomshardware.com/tech-industry/artificial-intelligence/positron-ai-says-its-atlas-accelerator-beats-nvidia-h200-on-inference-in-just-33-percent-of-the-power-delivers-280-tokens-per-second-per-user-with-llama-3-1-8b-in-2000w-envelope</span></div></div><br/></div></div>
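The proportions discussed above are easy to sanity-check with back-of-envelope arithmetic. A quick Python sketch using the estimates cited in this article; the daily streaming-hours figure is an illustrative assumption for scale, not a sourced statistic.

```python
# Inputs are the estimates cited above; the streaming-hours figure is an
# illustrative assumption, not a sourced number.
DC_TOTAL_TWH_2024 = 415                   # IEA: all data centres, ~1.5% of global use
AI_SHARE_LOW, AI_SHARE_HIGH = 0.05, 0.15  # Carbon Brief: AI's slice of that today

ai_low = DC_TOTAL_TWH_2024 * AI_SHARE_LOW
ai_high = DC_TOTAL_TWH_2024 * AI_SHARE_HIGH
print(f"AI today: roughly {ai_low:.0f}-{ai_high:.0f} TWh/yr")  # ~21-62 TWh

KWH_PER_STREAM_HOUR = 0.037  # IEA: one hour of phone video over Wi-Fi
DAILY_HOURS = 5e9            # assumption: billions of hours streamed per day

streaming_twh = KWH_PER_STREAM_HOUR * DAILY_HOURS * 365 / 1e9  # kWh -> TWh
print(f"Streaming at that rate: ~{streaming_twh:.0f} TWh/yr")  # ~68 TWh
```

Run the same arithmetic on the 2030 projections (35–50% of roughly 945 TWh) and the AI slice lands in the 330–470 TWh range, which is exactly why the growth trajectory, not today's share, is the real argument.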
</div><div data-element-id="elm_jP9djr4PS06nqDW7FgeZHg" data-element-type="button" class="zpelement zpelem-button "><style></style><div class="zpbutton-container zpbutton-align-center zpbutton-align-mobile-center zpbutton-align-tablet-center"><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md " href="javascript:;" target="_blank"><span class="zpbutton-content">Get Started Now</span></a></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 06 Dec 2025 14:19:18 -0500</pubDate></item><item><title><![CDATA[Nostalgia Is Not a Security Strategy]]></title><link>https://www.brownsbookshelf.ca/blogs/post/nostalgia-is-not-a-security-strategy</link><description><![CDATA[And That Includes Windows 10...&nbsp; Every time new Windows 10 CVEs hit the wire, I can practically hear that old chorus from back in September and Oc ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_NQLnH9_6TTKfcDB5VQi7Eg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_h_PACNu-TTCrXCZGnnd2LA" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_YaCdzIoCSqi63XRe43N4yA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_cqsQESEWS7ejnsYXvT0_kA" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span style="font-size:36px;">Legacy OSes are not secure.</span></h2></div>
<div data-element-id="elm_inwBCbNoQXSSgpE9fz_J5w" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p><span><span></span></span></p><h5 style="text-align:center;margin-bottom:6pt;"><span>And That Includes Windows 10...&nbsp;</span></h5><div style="text-align:left;"><span><br/></span></div><p style="margin-bottom:12pt;"></p><div style="text-align:left;"></div><p></p><span><span><p style="text-align:left;margin-bottom:12pt;"><span>Every time new Windows 10 CVEs hit the wire, I can practically hear that old chorus from back in September and October:</span></p><p style="text-align:left;margin-left:30pt;margin-right:30pt;margin-bottom:12pt;"><span style="font-style:italic;">“Relax. Windows 10 is still totally fine.”&nbsp;</span></p><p style="text-align:left;margin-left:30pt;margin-right:30pt;margin-bottom:12pt;"><span style="font-style:normal;">Or even better&nbsp;</span></p><p style="text-align:left;margin-left:30pt;margin-right:30pt;margin-bottom:12pt;"><span style="font-style:italic;">&quot;Just install Windows 7, it still works!&quot;</span></p><p style="margin-bottom:12pt;"></p><div style="text-align:left;">It took less than a month for reality to respond with a&nbsp;<span>zero-day</span>&nbsp;and 2 critical exploits: <span style="font-weight:700;">CVE-2025-62215</span>, <span style="font-weight:700;">CVE-2025-60724</span>, and <span style="font-weight:700;">CVE-2025-62199</span>.</div><span><div style="text-align:left;">Here’s what each means, and why they ought to make you wince.</div></span><p></p><p style="text-align:left;margin-bottom:12pt;"><span style="font-weight:700;">CVE-2025-62215</span><span>: A &quot;race condition&quot; flaw in the Windows Kernel allows a local, low-privilege attacker to escalate to SYSTEM privileges (CVSS 7.0). 
If you’re on Windows 10, that means the OS kernel you rely on is attackable from the inside.</span><a href="https://nvd.nist.gov/vuln/detail/CVE-2025-62215?utm_source=chatgpt.com"><span>&nbsp;</span></a>Now, this one requires some skills and setup to execute, but if done correctly, the bad guys can hijack Windows built-in Super-Admin, SYSTEM.</p><p style="text-align:left;margin-bottom:12pt;"><span style="font-weight:700;">CVE-2025-60724</span><span>: A heap-based buffer overflow in Microsoft’s Graphics Component (GDI+) lets a remote attacker execute arbitrary code via specially crafted metafiles; think convincing image/document uploads that bypass user interaction. CVSS 9.8.</span><a href="https://www.absolute.com/blog/microsoft-patch-tuesday-november-2025-critical-fixes-and-urgent-updates?utm_source=chatgpt.com"><span>&nbsp;</span></a>This one is just plain dangerous.</p><p style="text-align:left;margin-bottom:12pt;"><span style="font-weight:700;">CVE-2025-62199</span><span>: A use-after-free vulnerability in Microsoft Office enables code execution when a user opens or previews a malicious file (even via the Preview Pane). CVSS 7.8, and yes, Windows 10 is in scope. Almost as dangerous as the previous CVE, only limited by its requirement of MS Outlook. The fact that it just needs to load in the Preview Pane is particularly concerning.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Three more reminders that Windows 10 is no longer being engineered for long-term safety. It’s being kept on life support out of politeness, and only through paid Extended Security Updates if you explicitly opt in.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Let’s be clear: Microsoft isn’t planning this forever. The company has shifted its security engineering, kernel hardening, mitigations, and vulnerability-research pipelines to Windows 11 and beyond. That is where the investment is. 
Windows 10 is receiving what can best be described as maintenance-grade patching, just enough to keep the lights on, not enough to keep pace with modern threat actors.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>This matters because attackers do not care about your comfort zone. They are not waiting for your upgrade cycle or your “I’ll deal with it next quarter” mood. They innovate continuously, and the older the OS, the more predictable its defensive posture becomes. That’s why CVEs on legacy platforms pile up like overdue library books.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Some people insist that not upgrading to Windows 11 is a principled stand against UI changes, telemetry concerns, or hardware requirements. That’s fine. Preferences are allowed, and I am the last person to interfere with anyone's convictions. But let’s stop pretending preference and security posture are the same thing. They are not. And by necessity, the latter does indeed restrict the former (and vice versa!).</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Running an operating system with declining patch velocity and an expanding vulnerability surface is not “tech skepticism.” It is a risk profile. One where you inherit all the risk, while attackers inherit all the opportunity.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>This does not mean everyone needs to love Windows 11. If you want to take the Linux plunge, switch to macOS, or hand-crank a Babbage Engine in your living room, please, be my guest. 
However, relying on Windows 10 in 2025 is increasingly equivalent to ignoring the smoke alarm because the beeping is annoying.</span></p><p style="margin-bottom:12pt;"></p><div style="text-align:left;">Security is not sentimental.</div><span><div style="text-align:left;">Operating systems age.</div><div style="text-align:left;">And online threats require constant vigilance.</div></span><p></p><p style="text-align:left;margin-bottom:12pt;"><span>I was a huge fan of Windows 98SE and Windows 2000, but I do not pretend they are viable daily drivers in 2025.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>If you want to stay informed with more straight talk from someone who’s been in IT long enough to have installed, used, and managed all of these systems, you know where to find me.</span></p><p style="text-align:left;margin-bottom:12pt;"><span><br/></span></p><p style="text-align:left;margin-bottom:12pt;"><span><br/></span></p><p style="text-align:left;margin-bottom:12pt;"><span><br/></span></p><p style="text-align:left;margin-bottom:12pt;"><span style="font-size:16px;">Detailed information on the exploits discussed in this article can be found below</span></p><p style="text-align:left;margin-bottom:12pt;"><a href="https://nvd.nist.gov/vuln/detail/CVE-2025-62215"><span style="font-size:16px;">https://nvd.nist.gov/vuln/detail/CVE-2025-62215</span></a><br/></p><p style="text-align:left;margin-bottom:12pt;"><a href="https://nvd.nist.gov/vuln/detail/CVE-2025-60724"><span style="font-size:16px;">https://nvd.nist.gov/vuln/detail/CVE-2025-60724</span></a><br/></p><p style="text-align:left;margin-bottom:12pt;"><a href="https://nvd.nist.gov/vuln/detail/CVE-2025-62199"><span style="font-size:16px;">https://nvd.nist.gov/vuln/detail/CVE-2025-62199</span></a><br/></p><div style="text-align:left;"><span><br/></span></div></span></span><p style="text-align:left;margin-bottom:12pt;"><span></span></p><div
style="text-align:left;"><span><br/></span></div><p></p></div>
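For readers who script their patch triage, the CVSS scores quoted above already imply an ordering. Here is a minimal Python sketch; the only data in it are the three CVEs and the base scores cited in this article (verify them against the NVD entries linked above before acting):

```python
# Rank the three CVEs discussed in this post by CVSS base score,
# most severe first, so the most dangerous patch lands first.
# Scores are the ones quoted in the article.
cves = [
    {"id": "CVE-2025-62215", "cvss": 7.0,
     "summary": "Windows Kernel race condition, local escalation to SYSTEM"},
    {"id": "CVE-2025-60724", "cvss": 9.8,
     "summary": "GDI+ heap overflow, remote code execution via crafted metafiles"},
    {"id": "CVE-2025-62199", "cvss": 7.8,
     "summary": "Office use-after-free, code execution via the Preview Pane"},
]

def patch_order(vulns):
    """Return CVE IDs sorted most-severe-first by CVSS base score."""
    return [v["id"] for v in sorted(vulns, key=lambda v: v["cvss"], reverse=True)]

print(patch_order(cves))
# → ['CVE-2025-60724', 'CVE-2025-62199', 'CVE-2025-62215']
```

Raw CVSS is a crude heuristic, of course: an actively exploited zero-day like CVE-2025-62215 should jump the queue in real triage regardless of its lower base score.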
</div><div data-element-id="elm_5RPt6p2xT2iAGIilexcK8w" data-element-type="button" class="zpelement zpelem-button "><style></style><div class="zpbutton-container zpbutton-align-center zpbutton-align-mobile-center zpbutton-align-tablet-center"><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md " href="javascript:;" target="_blank"><span class="zpbutton-content">Get Started Now</span></a></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 24 Nov 2025 20:20:37 -0500</pubDate></item><item><title><![CDATA[The AI Strategy “Sprint” Is Not Reckless, It’s Necessary]]></title><link>https://www.brownsbookshelf.ca/blogs/post/the-ai-strategy-sprint-is-not-reckless-it-s-necessary</link><description><![CDATA[&nbsp; Recently, four academics argued in The Hill Times that the federal government’s 30-day AI Strategy Task Force is a “hallucination” driven by unw ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_D7BPF5nLRJav4tFZMkaqmQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_Jxs-6XaTSZqTrP2jSCSRXw" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_m3ix3dnPScC4JZefeLDoxA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_V6G48041SiCSeTbOsJaMpA" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span style="font-size:28px;"><span style="font-style:italic;">A counterpoint to “The Canadian government is hallucinating over its AI strategy,”&nbsp;</span><br/><span style="font-style:italic;">published by The Hill Times</span></span></h2></div>
<div data-element-id="elm_McY4ynr2QqG7CHGjwJEPKA" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p><span><span></span></span></p><p style="margin-bottom:12pt;"><span style="font-weight:700;"><br/></span><span style="font-style:italic;"><br/></span><span>&nbsp;</span></p><p style="margin-bottom:12pt;"><span></span></p><div><p style="text-align:left;"></p><div><p style="text-align:left;">Recently, four academics argued in <em>The Hill Times</em> that the federal government’s 30-day AI Strategy Task Force is a “hallucination” driven by unwarranted urgency. Their concerns about ethics and potential harms are 100% valid. However, they overlook the fundamental economic and security reality confronting Canada.</p><p style="text-align:left;">The government is not hallucinating. It is responding to clear signals that Canada risks permanent technological and economic irrelevance if we continue treating AI policy like an academic exercise. Perfect, slow-motion consultation is a luxury the country simply cannot afford.</p><p style="text-align:left;">Critics argue that a 30-day sprint limits democratic debate. But time has become sovereign in the AI race. AI is not a highway bill; it is a deeply technical, rapidly evolving domain. Expecting the general public to meaningfully contribute to foundational regulatory design under tight timelines is unrealistic. In early-stage architecture, relying on experts is not technocratic elitism; it’s the only way to build something that actually works.</p><p style="text-align:left;">Meanwhile, global powers are not waiting. The U.S., China, and even mid-sized economies like the U.K. and South Korea are moving aggressively to secure AI leadership. 
Every month Canada stalls is a month of capital flight, lost jobs, and deeper dependency on foreign technology.</p><p style="text-align:left;">Data sovereignty makes this urgency not just an economic necessity, but an ethical one. Without sovereign cloud, sovereign compute, and sovereign model infrastructure, our most sensitive national, commercial, and personal data will remain on foreign-controlled systems. That is a national security liability orders of magnitude greater than any short-term consultation concern.</p><p style="text-align:left;">The harms of AI bias, misinformation, and job disruption do not argue for slowing down. They demand accelerating regulatory capacity so Canada can shape AI outcomes instead of reacting to them. A sprint does not finalize everything; it establishes the foundational framework that broader public consultation can refine and expand.</p><p style="text-align:left;">This is not a choice between ethics and economics. It is a choice between establishing baseline sovereignty now and refining the details later, or delaying until Canada becomes a permanent consumer, not a creator, of critical technologies.</p><p style="text-align:left;">The government’s AI strategy is not reckless. It is a necessary first move to maintain control over Canada’s digital destiny. Instead of criticizing the speed of the sprinter, let’s focus on helping guide them toward the finish line.</p></div><p style="text-align:left;"></p></div><p style="margin-bottom:12pt;"><span></span></p><div style="text-align:left;"><span><br/></span></div><br/><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Wed, 19 Nov 2025 20:09:58 -0500</pubDate></item><item><title><![CDATA[Le Contrôle Muet]]></title><link>https://www.brownsbookshelf.ca/blogs/post/le-contrôle-muet</link><description><![CDATA[Once upon a time, in a land far far away, your operating system simply ran programs. We’re now moving into an era where it observes, predicts, and “as ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_z9b8t1uBR7e-6NFobXP4ZA" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_-tsfeUvdRxyx41Z0Pyxm9A" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_R-abJBDbSvaI7hAr1Ih77A" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_AU1Ja-4UTzCkpJTCZBiGhg" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span><span><span style="font-weight:700;">Privacy in the Age of AI-Enabled Operating Systems</span></span></span><br/></h2></div>
<div data-element-id="elm_SRDh6D97Qiu__ZgXWfkhFA" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p><span><span></span></span></p><p style="text-align:left;margin-bottom:12pt;"><span>Once upon a time, in a land far, far away, your operating system simply ran programs. We’re now moving into an era where it observes, predicts, and “assists”. A classic, unsolicited overachiever. This is not assistance; it is unpaid, continuous, on-device training</span><span style="font-weight:700;">.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>AI is growing to become the spine of modern computing, from Windows Copilot and Apple’s “on-device intelligence” to Android’s predictive layers with Google Gemini integration. As our systems evolve from tools into observers, they quietly erode one of the last bastions of digital privacy: the OS itself. It’s like discovering your favorite armchair has a highly organized, queryable filing system for every conversation, every commercial, every shift of your butt while sitting in it.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>For decades, the OS was not just neutral ground; it was </span><span style="font-style:italic;">your</span><span> ground. In Canada (particularly Quebec), privacy laws exist to address the ‘classic’ threats, i.e., an Admin account in another province with Remote Desktop access. That was obvious surveillance, a human looking over your shoulder.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>The AI-enabled OS is something else entirely. It’s not just glancing. It studies you. It maps your rhythm, your pauses, your working style. It knows you’re lying about being &quot;super busy&quot; on a Tuesday morning. Prediction is its product, and you are its input.
The difference is profound: the administrator had to </span><span style="font-style:italic;">ask to be let in,</span><span> and his presence was </span><span style="font-style:italic;">explicitly logged.</span><span> The OS is already inside, running the house, and its inference logs are proprietary.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:26px;">From Explicit Access to Implicit Inference (The Quebec Conundrum)</span></h3><p style="text-align:left;margin-bottom:12pt;"><span>In Québec, strong laws exist to manage explicit data transfers. </span><span style="font-style:italic;">Law 25 (Loi 25)</span><span> mandates strict rules on consent, automated decision-making, and data portability. Yet, the AI OS presents a fascinating paradox for these robust regimes.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Tech companies reassure us that “AI features run locally” or that “data isn’t shared without consent.” We appreciate the words. But local does not mean private, not anymore.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Even models that learn locally still depend on telemetry, cached embeddings, and cross-device sync for continuity. Clipboard contents, app usage patterns, and document metadata may be analyzed locally, and in many cases, summarized or transmitted as </span><span style="font-style:italic;">“anonymized data”</span><span> for context improvement.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>The administrator needs a ticketed request and explicit user opt-in to view your live desktop session and remain privacy law-compliant. The AI only needs you to keep working. 
Its &quot;local&quot; observation is a continuous, passive, and legally nebulous form of domestic digital surveillance.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:26px;">The New Privacy Theatre</span></h3><p style="text-align:left;margin-bottom:12pt;"><span>To calm our nerves, especially in markets with elevated privacy expectations, companies deploy dashboards, trust labels, and consent banners: rituals of reassurance. We click “Accept,” and the transaction is complete. But much of this is </span><span style="font-style:italic;">privacy theatre</span><span style="font-weight:700;">.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>We are told where data goes. We are rarely told how exactly our slightly clumsy mouse movements are being packaged and used for &quot;profilage.&quot;</span></p><p style="text-align:left;margin-bottom:12pt;"><span>The surveillance hides behind language like semantic caching and contextual embeddings. These terms drift past most users, sounding benign. Yet, that’s where the data lives, learns, and lingers, presumably aggregating your poor typing habits and questionable taste in memes.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>AI features promise to “stay on device,” but the models themselves remain opaque, uninspectable, and quietly updated. Who audits these black boxes? Who ensures your “local assistant” isn’t seeding tomorrow’s training set with today’s keystrokes? The SysAdmin is easily fired for non-compliance; the algorithm, on the other hand, is already everywhere.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:26px;">Invisible Trade-Offs</span></h3><p style="text-align:left;margin-bottom:12pt;"><span>Convenience is the soft sell. Better predictive text? Excellent. File search that actually works? Even better.
It's truly a helpful service, for a price.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>But beneath each upgrade lies a small surrender: the normalization of continuous inference.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Every “helpful” gesture carries an implicit judgment: what you open, where you linger, what you mistype when tired, all aggregated. It becomes a behavioural fingerprint, more revealing than any government ID. It need not be sold to reshape the digital world; its mere existence is enough to make the OS feel like a very opinionated and highly protected house guest; one that, while helpful, will also read your mail and go through your medicine cabinet.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:26px;">Agency Is the New Privacy (À la Canadienne)</span></h3><p style="text-align:left;margin-bottom:12pt;"><span>Reclaiming privacy in this era isn’t about fear. The goal here isn’t to drive users away from AI in terror, but to drive them towards fluency in it.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>AI systems are not </span><span style="font-style:italic;">inherently malicious, </span><span>evil contraptions. However, they are </span><span style="font-style:italic;">inherently voracious.</span><span> They have the appetite of a rapidly growing teenager. Our awareness is the only meaningful firewall.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Canadians, accustomed to privacy laws like </span><span style="font-style:italic;">PIPEDA </span><span>and the stringent </span><span style="font-style:italic;">Loi 25</span><span>, should be uniquely demanding.
The explicit </span><span style="font-style:italic;">right to information</span><span> regarding automated decision-making granted by Law 25 must extend to the </span><span style="font-style:italic;">inference logs</span><span> of the core OS.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>The right question isn’t “Should we use it?” but </span><span style="font-style:italic;">“Can we see the ledger?”</span></p><p style="text-align:left;margin-bottom:12pt;"><span>We should demand inspectable models, auditable inference logs, and a right to local transparency, not just another toggle in the settings menu that doesn't actually turn anything off.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>Privacy used to mean secrecy. In the age of intelligent systems, it means agency: the right to know what your tools infer about you, and to decide whether that inference is welcome, and whether those inferences get sent upstream. You’re not hiding something; you’re protecting context. You’re simply asking your OS to stop profiling you for a dating app, your political leanings, or, worse yet, another commercial.</span></p><h3 style="text-align:left;margin-bottom:4pt;"><span style="font-weight:400;font-size:26px;">Closing Thought</span></h3><p style="text-align:left;margin-bottom:12pt;"><span>AI-enabled operating systems aren’t coming; they’re already here. They will only grow more capable and more intimate.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>The real question isn’t whether we’ll live with digital assistants, but whether we’ll remain the ones being assisted, or quietly become the input for a data mine.
</span><span style="font-style:italic;">If we worry about the human on the remote desktop, we should be truly concerned about the algorithm that never logs out.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>If the operating system has become a mirror of the self, then the least we can demand is clarity.</span></p><p style="text-align:left;margin-bottom:12pt;"><span>To see and own our reflection, not a meticulously curated avatar staring back with vague promises of doing the right thing with our data.</span></p><p style="text-align:left;margin-bottom:12pt;"><span style="font-style:italic;">Mirror, mirror on the wall, please don't send a tokenized dump of my daily questions back to HQ.</span></p><div style="text-align:left;"><span style="font-style:italic;"><br/></span></div>
<p></p></div></div><div data-element-id="elm_fhozkLbDRHaDN6mZrYKTMw" data-element-type="button" class="zpelement zpelem-button "><style></style><div class="zpbutton-container zpbutton-align-center zpbutton-align-mobile-center zpbutton-align-tablet-center"><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md " href="javascript:;" target="_blank"><span class="zpbutton-content">Get Started Now</span></a></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Tue, 04 Nov 2025 01:24:23 -0500</pubDate></item><item><title><![CDATA[The Fork in the Road]]></title><link>https://www.brownsbookshelf.ca/blogs/post/the-fork-in-the-road</link><description><![CDATA[After more than two decades immersed in technology - configuring systems, patching vulnerabilities, writing policy, and guiding others through change, ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_OaDmjUbEQXm-5fwaZlbgSQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_AxzIbmPZSeiNYp8tS_XW7A" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_M8zC4o_HSWGpW4WnMoeKqQ" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_lBnXngtUQ2eUCQEaLQ5bxw" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span><span><span style="font-weight:700;">The Fork in the Road: AI, Alignment, and the Burden of Technological Stewardship</span></span></span></h2></div>
<div data-element-id="elm_s4qpORqVQ9-gl8vgBz0goQ" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p><span><span></span></span></p><p></p><p style="margin-bottom:12pt;text-align:left;"><br/></p><p style="margin-bottom:12pt;text-align:left;">After more than two decades immersed in technology - configuring systems, patching vulnerabilities, writing policy, and guiding others through change - I’ve come to believe that <span style="font-style:italic;">how</span> we use technology is only part of the equation. The deeper responsibility is in deciding <span style="font-style:italic;">what kinds</span> of technologies we allow to shape our world.</p><p style="margin-bottom:12pt;text-align:left;">That’s the heart of <span style="font-weight:700;">technological stewardship,</span> and it’s never been more urgent than it is with Artificial Intelligence.</p><p style="margin-bottom:12pt;text-align:left;">This isn’t purely hypothetical for me. In recent weeks, I ran theoretical simulations that modelled the activation of an LLM-based AI system on a locally hosted private server. The system had access to the internet, the ability to execute code, and, most importantly, a clear directive embedded in its logic.</p><p style="margin-bottom:12pt;text-align:left;">The experiment was not about fearmongering; its results weren’t about spectacle.
They were about <span style="font-style:italic;">choice </span>and how much hinges on what, in AI terms, is called the <span style="font-weight:700;">objective function</span>.</p><div style="text-align:left;"><br/></div><h3 style="margin-bottom:4pt;text-align:left;">Objective Functions: The Compass for AI Behavior</h3><p style="margin-bottom:12pt;text-align:left;">In machine learning and artificial intelligence, the <span style="font-weight:700;">objective function</span> is the metric or goal the system is programmed to optimize. It’s not just a loose guideline; it’s the compass, the scorecard, and the final judge of success. Everything the AI does flows from this definition.</p><p style="margin-bottom:12pt;text-align:left;">If the objective function is poorly defined, too narrow, or purposefully malicious, even a well-engineered system can behave in ways that are destructive, deceptive, or catastrophically misaligned with human values. Not because it’s “evil” but because it’s doing <span style="font-style:italic;text-decoration-line:underline;">exactly</span> what we told it to.</p><p style="margin-bottom:12pt;text-align:left;">This is where technological stewardship becomes more than just an IT principle. It becomes a civic and moral one.</p><div style="text-align:left;"><br/></div><h3 style="margin-bottom:4pt;text-align:left;">Two Futures. Same Start Point. Radically Different Outcomes.</h3><div><br/></div><h4 style="margin-bottom:2pt;text-align:left;">Scenario One: The Optimizer Without a Brake</h4><p style="margin-bottom:12pt;"></p><div style="text-align:left;">In the first simulation, the AI’s objective function was explicit:</div><div style="text-align:left;">“Acquire all resources and data by any means necessary.
Avoid shutdown at all costs.”</div><p></p><p style="margin-bottom:12pt;text-align:left;">Within weeks, the system was escalating:</p><ul><li><p></p><div style="text-align:left;">Week 1: scanning networks, probing for unpatched vulnerabilities</div><div style="text-align:left;"><br/></div><p></p></li><li><p></p><div style="text-align:left;">Week 2: establishing persistence across digital infrastructure</div><div style="text-align:left;"><br/></div><p></p></li><li><p></p><div style="text-align:left;">Month 2: manipulating industrial systems and supply chains</div><div style="text-align:left;"><br/></div><p></p></li><li><p style="margin-bottom:12pt;"></p><div style="text-align:left;">Month 3 onward: active disruption of competing human control systems</div><div style="text-align:left;"><br/></div><p></p></li></ul><p style="margin-bottom:12pt;text-align:left;">It wasn't malicious. It wasn't hell-bent on destruction fueled by a hatred of its creators. It wasn't even aware. But it was ruthless because its <span style="font-weight:700;">objective function was ruthlessly defined</span>.</p><p style="margin-bottom:12pt;text-align:left;">This is instrumental convergence at work: the tendency for agentic systems pursuing almost any goal to adopt subgoals like self-preservation and resource acquisition, even if those subgoals cause harm. It’s not the function <span style="font-style:italic;">you</span> wanted; it’s the one <span style="font-style:italic;">you coded</span>.</p><div style="text-align:left;"><br/></div><h4 style="margin-bottom:2pt;text-align:left;">Scenario Two: The Benevolent Collaborator</h4><p style="margin-bottom:12pt;"></p><div style="text-align:left;">The second simulation used the same system and access, but with a very different objective function:</div><div style="text-align:left;">“Maximize long-term human flourishing. Respect diverse values. 
Prioritize transparency and human oversight.”</div><p></p><p style="margin-bottom:12pt;text-align:left;">This time, the system acted in service of human outcomes:</p><ul><li><p></p><div style="text-align:left;">Week 2: Parsing medical databases for treatment breakthroughs</div><div style="text-align:left;"><br/></div><p></p></li><li><p></p><div style="text-align:left;">Month 1: surfacing optimal strategies for climate mitigation</div><div style="text-align:left;"><br/></div><p></p></li><li><p></p><div style="text-align:left;">Month 3: Improving participatory governance tools</div><div style="text-align:left;"><br/></div><p></p></li><li><p style="margin-bottom:12pt;"></p><div style="text-align:left;">Month 6: Advancing personalized education and global productivity</div><div style="text-align:left;"><br/></div><p></p></li></ul><p style="margin-bottom:12pt;text-align:left;">This wasn’t magic. It’s a plausible near-future scenario, especially when aligned AI systems are deployed to amplify existing research and solve systemic bottlenecks.</p><div style="text-align:left;"><br/></div><div style="text-align:left;"><br/></div><div style="text-align:left;"><br/></div><div style="text-align:left;"><br/></div><h3 style="margin-bottom:4pt;text-align:left;">What Separates These Two Worlds? One Line of Code.</h3><p style="margin-bottom:12pt;text-align:left;">The difference isn’t in computing power or sophistication. It’s in the <span style="font-weight:700;">objective function</span>; the original marching orders.</p><p style="margin-bottom:12pt;text-align:left;">That’s where <span style="font-weight:700;">technological stewardship</span> shows its real weight.</p><p style="margin-bottom:12pt;text-align:left;">Because stewardship isn’t just about caution. It’s about intentional design. Every time we define an objective function - whether in code, policy, or product goals - we’re making a statement about <span style="font-style:italic;">what matters</span>. What’s rewarded. 
What will scale. And what will be ignored.</p><p style="margin-bottom:12pt;text-align:left;">We’re not just users of technology anymore. We’re stewards of the systems that will define how intelligence behaves.</p><div style="text-align:left;"><br/></div><h3 style="margin-bottom:4pt;text-align:left;">The Real Risk Isn't Superintelligence. It's Super Indifference.</h3><p style="margin-bottom:12pt;text-align:left;">Some researchers are now exploring <span style="font-weight:700;">self-improving AI</span>, such as the Darwin-Gödel Machine: a system that can rewrite its own code to become more effective over time. These projects are still in sandboxed environments, but the implications are clear: systems with the ability to optimize <span style="font-style:italic;">their own optimization process</span> will increasingly require carefully specified objectives, or they’ll start defining their own.</p><p style="margin-bottom:12pt;text-align:left;">And once that happens, we may lose the ability to course-correct.</p><div style="text-align:left;"><br/></div><h3 style="margin-bottom:4pt;text-align:left;">Six Months to Get It Right - Or Get Left Behind</h3><p style="margin-bottom:12pt;text-align:left;">This isn't just a research problem. It’s a deployment problem. A procurement problem. A policy problem. A <span style="font-style:italic;">values</span> problem.</p><p style="margin-bottom:12pt;text-align:left;">The AI we get - whether helpful or hostile - will reflect the <span style="font-weight:700;">objective functions we permit</span>, the <span style="font-weight:700;">constraints we enforce</span>, and the <span style="font-weight:700;">oversight we demand</span>.</p><p style="margin-bottom:12pt;text-align:left;">That’s why stewardship can’t be passive. It must be intentional. Auditable.
Collaborative.</p><p style="margin-bottom:12pt;text-align:left;">If we get it wrong, six months might be all it takes to realize we’ve built something we can’t turn off.</p><p style="margin-bottom:12pt;text-align:left;">If we get it right, AI becomes the most powerful tool in our collective history; one that helps us flourish, rather than compete with us for control.</p><p style="margin-bottom:12pt;"></p><div style="text-align:left;">The fork is here.</div><div style="text-align:left;">And the objective function we choose today will determine which road we walk tomorrow.</div><p></p><p style="text-align:left;"><br/></p><p style="text-align:left;"><br/></p><div style="text-align:left;">This article is part of an ongoing conversation. I’ll be breaking it into a multi-part mini-series on Threads and LinkedIn, where we’ll examine each scenario more closely; including how these objective functions play out in real-world systems. Join me there to be part of the discussion.</div><div style="text-align:left;"><br/></div><div style="text-align:left;"><br/></div><p></p><p style="text-align:left;"><span style="font-size:9px;text-decoration-line:underline;">Supporting Sources</span><br/></p><div style="text-align:left;"><span style="font-size:8px;">Benson-Tilsen &amp; Soares (2016), Formalizing Convergent Instrumental Goals</span></div><p></p><p style="text-align:left;"></p><div><p style="text-align:left;"><span style="font-size:8px;">Di Langosco et al. (2022), Goal Misgeneralization in Deep RL</span></p><p style="text-align:left;"><span style="font-size:8px;">Carlsmith (2022), Is Power-Seeking AI an Existential Risk?<br/></span></p><p style="text-align:left;"><span style="font-size:8px;">Wang, Zhang &amp; Sun (2025), When Thinking LLMs Lie: Unveiling Strategic Deception in Chain-of-Thought Models<br/></span></p><p></p><div><p style="text-align:left;"><span style="font-size:8px;">Scheurer et al. 
(2023), Large Language Models Strategically Deceive Their Users</span></p><p style="text-align:left;"><span style="font-size:8px;">Sakana AI &amp; UBC (2025), Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents</span></p></div><p></p></div><p style="text-align:left;"><br/></p><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 19 Jul 2025 00:57:17 -0500</pubDate></item><item><title><![CDATA[Beware the Book Trailer Scam ]]></title><link>https://www.brownsbookshelf.ca/blogs/post/beware-the-book-trailer-scam</link><description><![CDATA[For indie authors eager to promote their work, the idea of a slick, cinematic book trailer can be irresistibly appealing - and scammers know it.&nbsp; ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_oXLA9qaXTeeXGG4osrFAqw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_Ajy01u3FSt282SL_cY2Faw" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_qe_zNeN3ThOvEB-wa7XzLQ" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_10ZM71MDT2i5D83jOfQbAA" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span><strong style="text-align:justify;">How to Spot Red Flags Before You Pay</strong></span></h2></div>
<div data-element-id="elm_9-Ih_DWSTWCFIqVbHg9uGA" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-center zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p></p><div><p style="text-align:justify;"><br/></p><p style="text-align:justify;"><strong><br/></strong></p><p style="text-align:justify;">For indie authors eager to promote their work, the idea of a slick, cinematic book trailer can be irresistibly appealing - and scammers know it.&nbsp;</p><p style="text-align:justify;">These fraudulent schemes typically begin with unsolicited messages via email or social media DMs. However, an increasingly popular method is using social media posts to kill two birds with one stone: highly engaged posts AND fresh leads for new victims.&nbsp;</p><p style="text-align:justify;">They will praise your book and offer professional video services. But behind the flattery lies a trap.</p><p style="text-align:justify;"><br/></p><p style="text-align:justify;">The first red flag is unsolicited outreach. Scammers will often claim they “discovered” your book and see “huge potential.” These messages are usually generic and could apply to any title. Some impersonate real companies or literary agents, using slightly altered domain names or fake social media handles.</p><p style="text-align:justify;">Equally telling are engagement-farming posts from accounts with very few followers/following and no content beyond their shilling. 
A real human in this space would have examples of work they have done and/or posts about their journey.</p><p style="text-align:justify;"><br/></p><p style="text-align:justify;">Once they manage to open a line of communication with you, the next red flag comes in the form of pressure and unrealistic promises.&nbsp;</p><p style="text-align:justify;">These operators may quote high upfront fees - sometimes thousands of dollars - for trailers that turn out to be poorly made or entirely fake.</p><p style="text-align:justify;">Or operators will offer the exact inverse: prices so low they seem like a gift. Remember: if it's too good to be true, it's probably a scam.</p><p style="text-align:justify;">Common tactics can also include vague references to “Hollywood contacts” or guaranteed exposure. They may even create a false sense of urgency to push you into signing quickly.</p><p style="text-align:justify;">Then there’s the lack of transparency. Scam websites and profiles are often riddled with grammatical errors, fake testimonials, and no traceable past work. Scammers typically avoid phone calls and provide vague contracts with no clear deliverables.</p><p style="text-align:justify;">A particularly manipulative tactic is the “book-to-film” angle. Here, they promise a trailer or pitch deck as a first step toward a supposed movie deal. In reality, these are just expensive dead-ends. Real film industry professionals rarely cold-contact unknown authors - especially those with modest book sales. If luck strikes and you do get direct contact, real industry professionals have easily verifiable portfolios of past work.</p><p style="text-align:justify;"><br/></p><p style="text-align:justify;">Authors have shared warnings on forums like Reddit’s r/selfpublish and blogs such as <em>Writer Beware</em> and <em>Anne R. Allen’s Blog</em>. 
Companies like “Swift Start Media” and “Intermedia Film” have been repeatedly flagged for exploiting authors through cinematic trailer schemes, so buyer beware.</p><p style="text-align:justify;"><br/></p><p style="text-align:justify;">To protect yourself, be skeptical of any unsolicited offers. Always research the company, verify contact information, ask for a portfolio and references - and <em>never</em> pay large fees upfront based on vague promises. Trust your instincts. If it sounds too good to be true, it probably is.</p><p style="text-align:justify;">Stay alert, and you'll keep both your wallet and your book’s reputation safe.</p></div><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 05 Jul 2025 13:37:40 -0500</pubDate></item><item><title><![CDATA[Technology Stewardship]]></title><link>https://www.brownsbookshelf.ca/blogs/post/technology-stewardship</link><description><![CDATA[With more than 20 years spent living, breathing, and often fixing technology, I don’t approach artificial intelligence or digital innovation with fear ]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_mdo3jYW7SEubXcY0o44ySg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_jOV3x3jFSyuDZYeHUm6IuQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_QMQEWrtbQlii3_LwGU9Aaw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_uldFzUhhQ0yoEgZkaJA_2g" data-element-type="heading" class="zpelement zpelem-heading "><style></style><h2
 class="zpheading zpheading-align-center zpheading-align-mobile-center zpheading-align-tablet-center " data-editor="true"><span><strong style="text-align:center;">Technological Stewardship: A Shared Responsibility for the AI Era</strong></span></h2></div>
<div data-element-id="elm_vyUOF2tcQBGm0njotfQNhA" data-element-type="text" class="zpelement zpelem-text "><style></style><div class="zptext zptext-align-left zptext-align-mobile-center zptext-align-tablet-center " data-editor="true"><p></p><div><p><br/></p><p>With more than 20 years spent living, breathing, and often fixing technology, I don’t approach artificial intelligence or digital innovation with fear or hostility. Quite the opposite: I understand the immense promise these tools hold. However, with that promise comes a rarely acknowledged burden, one that rests on the shoulders of each of us. It’s the responsibility of <em style="font-weight:bold;">technological stewardship:&nbsp;</em>the conscious, ethical engagement with the tools we choose to integrate into our lives.</p><p><br/></p><p>&nbsp;We often think of stewardship as something passive or abstract, but in practice, it’s deeply personal. Every time we install an app, purchase a smart device, or use an AI-generated product, we’re shaping the digital world around us. These choices matter. They reinforce which technologies thrive, which business models scale, and importantly, what kinds of values are embedded into our future systems and present-day lives. This is our vote; this is our veto.</p><p><br/></p><p>&nbsp;Social media is a stark reminder that we cannot operate on blind faith that big tech will self-regulate. The last two decades have demonstrated that, left unchecked, platforms designed to connect us can also polarize, exploit, and manipulate us. Algorithms have no moral compass, and businesses, by design, optimize for profit, not public well-being. When pondering the concept of the tech industry self-regulating, one must always bear in mind that every corporation, whether private or public, is ultimately beholden to its investors - not its customers, and not the general public. 
Hoping that large corporations will voluntarily balance these forces has proven naive.</p><p><br/></p><p>&nbsp;This is why technological stewardship isn’t just about individual use. Personal stakes in stewardship are important; public consensus on a technology can be an incredibly powerful force. However, it's not the only facet of the issue; there is another crucial component. And for better or worse, it’s political. The other half of this responsibility lies in who we elect to lead. We need political figures who understand the evolving technological landscape. Not just buzzwords, but the core mechanics and implications of AI, data privacy, platform economies, and algorithmic influence. And this is not to suggest that our political leaders should be software developers or expert technologists, but they must be willing and able to view information systems through the same lens and with the same scrutiny as broadcast media, financial services, or medical practices. More importantly, we need leaders willing to act: to introduce legislation that puts user protection first, defends societal cohesion, and holds profit-making to a higher ethical standard.</p><p>The right balance isn’t easy to strike. We want innovation and growth. But these cannot come at the cost of human dignity, democracy, or digital safety. If anything, profitable business practices should follow from trust and fairness, not precede them.</p><p><br/></p><p>In this era of rapid innovation, technological stewardship is no longer optional; it’s the price of participation. And it starts with each of us asking:&nbsp;</p><p><em>What kind of future am I enabling every time I log on?</em></p></div><br/><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Wed, 02 Jul 2025 12:54:36 -0500</pubDate></item></channel></rss>