Buzzwords of the Day 12-14-2023
*Special Note from the Author: This week's issue was written entirely in Vim!*
# This Week's Buzzwords:
declare -a Buzzword=("Recall" "Licensing" "AI/LLM")
${Buzzword[0]} = "Recall" - Self-driving cars aren't ready for the world yet.
Tesla shocked the world this week by announcing a recall of every vehicle equipped with its "Autopilot" self-driving mode. It turns out that, despite legally required safety warnings on an easily dismissed splash screen, drivers misused the system so routinely that Tesla feels the best path forward is to restrict how these features can be used on public streets. The software "warns" drivers at launch that the Autosteer feature is meant to assist the driver, not replace them entirely. Conveniently, YouTube, Twitter (sorry, "X"), Instagram, and TikTok are full of video evidence of Tesla owners "limit testing" the Autosteer feature.
Ars Technica reports that despite Tesla's public reminders that Autosteer is designed specifically for major roads and highways, the system clearly functions on smaller streets, and even when used as intended it can fail to identify important data like lane dividers (claims backed up by official statements from the National Highway Traffic Safety Administration). Ultimately, the data from what amounts to paid beta testing overwhelmingly shows that either "smart" driving assistance powered by current AI technology isn't ready for real-world use, or the real world is not ready to leverage this technology responsibly en masse. In either case, the future of automated driving remains uncertain, and this outlet remains hopeful it can roll out safely sometime in the future.
Source Context:
https://techcrunch.com/2023/12/13/tesla-to-restrict-autopilots-best-feature-following-recall/
~~~
${Buzzword[1]} = "Licensing" - You'll own nothing and like it.
There was a time when purchasing a software license meant you owned the right to use that software. Companies would support their products as advertised, and users paid for them (usually a meaningful sum) up front, and only once. If a substantially better or more efficient edition of that software released later, those users could keep using the copy they had already purchased (which may well suit their needs), or choose to pay additional money for the improved version.
That model terrifies software publishers in 2023. In the ever-growing trend of businesses declaring end-of-life for their "lifetime" licenses, popular virtualization provider VMware announced this week that it plans to transition its entire user base to a subscription-based model in the coming years. Executives and public spokespeople for the company have assured users of continued support until the end of their existing service terms. The company also announced a plan to strongly incentivize those users (who purchased lifetime access to this software for a meaningful sum of money) to transition their perpetual licenses over to the pay-as-you-go subscription model.
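The economics of that transition are easy to sketch. Here is a minimal comparison of cumulative spend under the two models; all prices are hypothetical stand-ins, not VMware's actual figures:

```python
# Hypothetical pricing -- illustrative only, not VMware's real numbers.
PERPETUAL_PRICE = 600        # one-time, up-front purchase
ANNUAL_SUBSCRIPTION = 200    # recurring, billed every year

def cumulative_cost(years: int, perpetual: bool) -> int:
    """Total spend after `years` under each licensing model."""
    if perpetual:
        return PERPETUAL_PRICE          # paid once, owned forever
    return ANNUAL_SUBSCRIPTION * years  # grows without bound

# The subscription matches the one-time price at year 3 and keeps growing.
for year in (1, 3, 5, 10):
    print(year, cumulative_cost(year, True), cumulative_cost(year, False))
```

The numbers are made up, but the shape of the curves is the whole point: one model has a ceiling, the other doesn't.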
VMware claims this move is primarily driven by the industry's apparent comfort with subscription-based models, but this outlet asks a follow-up:
> Is the demand-side of the market "comfortable" with this model, or is the supply-side of this market overwhelmingly deciding they can dictate the future?
Source Context:
~~~
${Buzzword[2]} = "AI/LLM" - Once again, AI for good and evil.
This week brought more ups and downs for the future of integrating AI into everyday life. Starting with the upsides: Intel recently revealed that its upcoming AI compute cores are being designed with the help of AI! CNET covered Intel's process in full (link below), and some of the highlights showcase how, with the proper implementation, AI-powered intelligent analysis can be a force multiplier for complex development and engineering tasks like transistor planning and performance analysis. Intel credits several improvements in its Meteor Lake chips to an AI-powered search for the most consistent "sweet spot" across the production line.
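Intel's actual tooling is proprietary, but the "sweet spot" idea can be sketched as a search that optimizes for consistency across manufacturing variation rather than peak performance on a single sample. Everything below — the performance model, the frequency parameter, the variation figures — is a hypothetical stand-in:

```python
import random

random.seed(42)

# Hypothetical stand-in for measured chip behavior: performance at a given
# frequency setting varies per die because of process variation.
def simulated_performance(freq_ghz: float, die_variation: float) -> float:
    # Performance rises with frequency but collapses past each die's limit.
    limit = 5.0 + die_variation
    return freq_ghz if freq_ghz <= limit else freq_ghz - 3 * (freq_ghz - limit)

def sweet_spot(candidate_freqs, dies):
    # Score each setting by its WORST result across sampled dies: the most
    # consistent setting across the line, not the fastest on the best die.
    return max(candidate_freqs,
               key=lambda f: min(simulated_performance(f, d) for d in dies))

dies = [random.gauss(0.0, 0.3) for _ in range(100)]   # simulated process variation
candidates = [4.0 + 0.1 * i for i in range(10)]       # 4.0 .. 4.9 GHz settings
best = sweet_spot(candidates, dies)
```

The design choice worth noticing is the `min` inside the `max`: the search deliberately rewards the setting that holds up on the worst die sampled, which is one plausible reading of "most consistent sweet spot across the production line."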
On the other side of the ethical spectrum, Hyperplane, a San Francisco-based AI startup, is developing an AI-powered analysis model for banks to leverage to "personalize" client experiences. It is currently partnered with several Brazilian banks to test the model, which it claims analyzes client account and transaction information to determine the most unique and personal metadata. TechCrunch breaks down the process (full context linked below), which outlines a plan to give banks the capacity to develop individualized profiles for each of their clients. While this data will likely be used to bolster fraud-detection algorithms (which already often leverage some machine learning to determine likely spending patterns), it is increasingly likely that it will also be used to more effectively advertise bank and bank-partner products based on highly personal insights. Hyperplane argues that the first-party data a bank can collect on a consumer is more tangibly valuable and useful to businesses than data acquired by any other means. Per TechCrunch, the models in test deployment right now are built for either:
- Building out audience segments (read: customer acquisition)
- Creating "lookalike" audience segment suggestions (read: customer acquisition, but with different wording)
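To make both bullets concrete, here is a minimal sketch of what segment-building and "lookalike" matching can look like over transaction data. The data, threshold, and similarity measure are all hypothetical — this is not Hyperplane's method, just the generic shape of the technique:

```python
from collections import Counter
from math import sqrt

# Hypothetical transaction data: (customer_id, merchant_category) pairs.
transactions = [
    ("alice", "travel"), ("alice", "travel"), ("alice", "dining"),
    ("bob",   "groceries"), ("bob", "groceries"), ("bob", "fuel"),
    ("carol", "travel"), ("carol", "dining"), ("carol", "travel"),
]

def spend_profile(customer):
    """Normalized category frequencies -- the 'personal metadata'."""
    counts = Counter(cat for cid, cat in transactions if cid == customer)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def similarity(p, q):
    """Cosine similarity between two spend profiles."""
    cats = set(p) | set(q)
    dot = sum(p.get(c, 0) * q.get(c, 0) for c in cats)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm

def lookalikes(seed, threshold=0.9):
    """'Lookalike' audience: customers whose profile resembles the seed's."""
    ref = spend_profile(seed)
    return [c for c in {cid for cid, _ in transactions}
            if c != seed and similarity(spend_profile(c), ref) >= threshold]
```

In this toy dataset, a seed customer who spends on travel and dining pulls in the other travel-and-dining customer and skips the groceries-and-fuel one — which is exactly why "lookalike suggestions" is customer acquisition with different wording.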
If combined with additional generative models, this type of software opens a rabbit hole where your bank sells its insights on your transaction history to advertisers, who can then leverage generative models to tailor advertising very specifically to you. Not "you", but YOU. In an age where the average legislator cannot differentiate between "higher artificial intelligence" and a "strong LLM-based chatbot", one can only hope the financial sector chooses the path of moral high ground. Maybe.
Source Context:
https://www.cnet.com/tech/computing/ai-helps-chipmakers-design-the-very-processors-that-speed-up-ai/