Blog - David Orolin | Senior Programmer
Using AI to create explicit content, unfortunately, isn't anything new. It's nice that all it takes for all major publications to take note of this is when the same thing happens to the Person of the Year. Taylor Swift's face has appeared on a series of explicit AI generated images posted by an unnamed group of Telegram platform.
Billionaire Elon Musk announced that his company Neuralink implanted their first chip into a human patient's brain. This puts them among global pioneers like New York's Synchron when it comes to developing technologies enabling people with total paralysis to control smart devices with their thoughts.
Almost all printer manufacturers dream of jail time for anyone who as much as looks in the general direction of a third-party ink cartridge. HP has managed to come up with the second-best "solution": They claim that third party cartridges can infect your printer and consequently your network with viruses! Although such an attack is theoretically possible, HP's argument has not convinced the public or security experts.
Coincidentally, it soon emerged that HP had become a target of hacker attacks. I bet they got invaded by third-party cartridges.
In the previous newsletter, I mentioned that the chat tool ChatGPT has become somewhat lazy. OpenAI takes lazy robots seriously, so the first update has been released, addressing this issue. And a host of other ones.
Google is adding the Bard AI assistant to its messaging application. Typically, I would spin this news as positive, but as my research for the article on AI and intellectual property revealed, Google Bard has appalling terms regarding user data processing. And Forbes magazine unfortunately reached a similar conclusion. According to available sources, Google Bard will have the ability to learn from all your conversations – new and existing ones.
And to round up the security news, it turns out that even AI tools running offline in a local environment can potentially be dangerous. A study has been published examining models trained to intentionally and unpredictably feed users with dangerous information. An example? Your new programming copilot will work flawlessly throughout 2023, only to start deliberately generating code with security vulnerabilities in 2024.The Early Access Program for the upcoming version 2024.1 of Rider (and other JetBrains tools) has been launched.
GitHub Copilot received an update allowing, among other things, to expand the context it has access to during code generation. GitHub also expands the range of available commands in this update.