Buzzwords of the Day 02-21-2024
#This Week's Buzzwords:
declare -a Buzzword=( "Exploit" "AI/LLM" )
${Buzzword[0]}="Exploit" - When everything is a service, what do you actually control?
The corporate software landscape is a dramatically different place than it was even 10 years ago. In droves, businesses are transitioning away from proprietary or locally-hosted software (whether by choice or by necessity) towards a Software-as-a-Service (SaaS) model...and that includes hackers. As quickly as enterprise software has evolved, so too has malware: publishers have discovered a lucrative business model in offering Malware-as-a-Service, and reports of increased attacks using these rented tools are published weekly.
Threat actors paying for access to curated, maintained malware is, however, only one side of this coin. Security research firm Wing Security published a report earlier this month laying out just how much additional attack surface the average business takes on as it adopts SaaS solutions across its internal stack. Each individual "subscription" a business employs adds an inlet and outlet for data between that company and the web. Encrypted or not, this line of communication is necessary for the service to function, which means accepting the risk of that expanded surface area. Wing's report (linked below) dives into the specifics of some of these vectors, including shadow services (malicious software masquerading as something legitimate), poor configuration such as single-factor login, and more.
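To make that surface-area math concrete, below is a minimal sketch of the kind of inventory audit a report like Wing's implies. The app names, fields, and checks are illustrative assumptions invented for this column, not Wing's tooling or any vendor's real API.

    from dataclasses import dataclass

    @dataclass
    class SaaSApp:
        name: str
        mfa_enforced: bool   # False = the single-factor login Wing flags
        sanctioned: bool     # False = a potential "shadow" service

    def audit(apps: list[SaaSApp]) -> list[str]:
        """Flag each subscription that widens the attack surface."""
        findings = []
        for app in apps:
            if not app.sanctioned:
                findings.append(f"{app.name}: unsanctioned (shadow) service")
            if not app.mfa_enforced:
                findings.append(f"{app.name}: single-factor login enabled")
        return findings

    # Hypothetical inventory for illustration only.
    inventory = [
        SaaSApp("crm.example", mfa_enforced=True, sanctioned=True),
        SaaSApp("filedrop.example", mfa_enforced=False, sanctioned=False),
    ]
    for finding in audit(inventory):
        print(finding)

Even this toy version makes the point: every entry in the inventory is one more line of communication that has to be accounted for.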
These attack vectors are not limited to businesses or corporate applications, either. Wyze, maker of cloud-accessible home cameras, has now inadvertently exposed users' camera feeds to strangers twice in the past 30 days. Per notices sent to active users, the current issue is the result of a misconfiguration in an AWS service Wyze uses to host its camera feeds. For a translation: the home-security Software-as-a-Service experienced an issue, caused by its use of Amazon's software-as-a-service, that exposed users' private camera feeds to unauthorized viewers. Much like the recent conversations about licensing media versus owning it, there is a growing concern about privacy in the choice between software run locally and software run as-a-service.
Source Context:
https://thehackernews.com/2024/02/how-nation-state-actors-target-your.html (SaaS increases Attack Vectors)
https://wing.security/resources/report/the-2024-state-of-saas-security/ (Wing Security's report in full)
https://arstechnica.com/gadgets/2024/02/wyze-cameras-gave-13000-people-unauthorized-views-of-strangers-homes/ (Wyze's most recent incident)
~~~
${Buzzword[1]}="AI/LLM" - All aboard! Next stop, Uncanny Valley
The whole of the internet likely remembers last year's very terrifying AI-generated video of Will Smith "eating spaghetti". If you have not seen it, and would prefer not to look it up, the result was unrealistic, "fluid", and more akin to a second-hand description of an acid trip than to a believable video. In the background since then, OpenAI has been refining their own generative video model, and the speed of their progress should ring alarm bells. Loud ones. Sora, the name of their model, is currently capable of generating high-fidelity video (both artistic and photo-realistic) up to 1 minute in length from a text input. The model, as OpenAI explain in their documentation, generates a rough composite from the user's input and re-calculates the result until a "desirable" end product is reached. Their documentation shows the improvements made as the model iterates: the first sample is eerily abstract and difficult to discern, the second looks like a heavily-filtered, compressed video file, and the fourth looks like high-fidelity, realistic footage. While the model, like many generative models, struggles with some of the nuances of "reality" (gravity and physics seem to be an Achilles' heel for now), the quality of the subjects and their contrast with the video background is eerily strong.
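OpenAI's documentation describes Sora as a diffusion model: generation begins as pure noise and is refined over many denoising passes, which is exactly the rough-composite-then-recalculate loop described above. The sketch below illustrates that loop in the abstract; denoise_step, the tensor shapes, and the step count are placeholders invented for this column, not OpenAI's actual model or API.

    import numpy as np

    def denoise_step(video: np.ndarray, step: int, prompt: str) -> np.ndarray:
        """Stand-in for the learned denoiser, conditioned on the text prompt."""
        return video * 0.9  # placeholder: each pass strips away a little noise

    def generate(prompt: str, frames: int = 60, height: int = 64,
                 width: int = 64, steps: int = 50) -> np.ndarray:
        # Start from pure noise (the "eerily abstract" first sample)...
        video = np.random.randn(frames, height, width, 3)
        for step in range(steps):
            # ...and iterate toward the "desirable" end product.
            video = denoise_step(video, step, prompt)
        return video

    clip = generate("a dog chasing a ball on a beach")
    print(clip.shape)  # (60, 64, 64, 3): frames x height x width x RGB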
One feature this outlet noticed is how well the model handles object permanence, particularly for background elements that are temporarily obscured by movement. Other generative video models have struggled to retain this information, implying there is some sort of buffer or reference Sora uses to "remember" what objects it has placed in the composite (frame) of the video. The videos the model renders have convincing depth, which Sora accomplishes by stacking "frames" into layers and presenting them as a single stacked composite for output, similar to how a digital artist creates layered images in software like Photoshop, Illustrator, or GIMP. One example in their documentation shows the model re-mapping part of an existing video (akin to CGI background removal/editing) with startling accuracy.
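As a rough illustration of that layered-composite analogy, the sketch below performs back-to-front alpha compositing, the same flattening operation image editors use for layers. It illustrates only the analogy (note how occluded background pixels persist in their own layer, one plausible route to object permanence) and makes no claim about Sora's internals.

    import numpy as np

    def composite(layers: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
        """Blend (RGB, alpha) layers from back to front into one frame."""
        out = np.zeros_like(layers[0][0])
        for rgb, alpha in layers:  # back layer first, front layer last
            out = alpha[..., None] * rgb + (1 - alpha[..., None]) * out
        return out

    h, w = 4, 4
    # Background layer: fully opaque blue, present everywhere.
    background = (np.ones((h, w, 3)) * [0.2, 0.4, 0.8], np.ones((h, w)))
    # Foreground subject: a red patch that temporarily obscures the background.
    subject_alpha = np.zeros((h, w))
    subject_alpha[1:3, 1:3] = 1.0
    subject = (np.ones((h, w, 3)) * [0.9, 0.1, 0.1], subject_alpha)

    frame = composite([background, subject])
    print(frame.shape)  # (4, 4, 3); obscured blue pixels survive in their layer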
For now, the model is not available for public or private use, and no release date or further details were available at the time of publication. As OpenAI continues to iterate on, improve, and train this model, these 'tells' will fade and become harder to spot, and the technology will likely find its way into all sorts of industries. It certainly has the power to break down barriers and propel video creation forward by leaps and bounds. The question that remains, once this model is released to the public, is how we can best prepare for the inevitable weaponization of these tools for harm as much as for good.
Source Context:
https://fossbytes.com/what-is-openais-sora-video-generator-ai-how-does-it-work/ (high-level summary of Sora AI)
https://openai.com/research/video-generation-models-as-world-simulators (OpenAI's first-party documentation & examples)