A Russia-linked disinformation campaign known as Doppelgänger is employing advanced obfuscation techniques and likely deploying AI to generate content, say security researchers. Doppelgänger has been called Russia's "most aggressively persistent covert influence operation" since 2017.
The U.S. Cybersecurity and Infrastructure Security Agency is urging software developers to adopt memory-safe coding as part of an effort to address the critical vulnerabilities that stem from memory-unsafe programming languages and to further shift security responsibility away from end users.
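The guidance itself is language-agnostic, but a minimal sketch, assuming Rust as one example of a memory-safe language, shows the class of bug the recommendation targets: an out-of-bounds access is caught at runtime or forced through an explicit check, rather than silently reading or corrupting adjacent memory as an unchecked pointer access in C or C++ might.

```rust
// Illustrative sketch (not from the article) of what memory-safe
// languages provide: invalid indexing is detected, not exploited.
fn main() {
    let records = vec![10u32, 20, 30];

    // Checked access: .get() returns None for an invalid index,
    // so the caller must handle the failure explicitly.
    match records.get(7) {
        Some(value) => println!("record: {value}"),
        None => println!("index 7 is out of bounds - handled safely"),
    }

    // Direct indexing is also bounds-checked: records[7] would panic
    // immediately rather than corrupt memory the way an unchecked
    // array access in a memory-unsafe language could.
}
```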
By the numbers, who has implemented GenAI in their organization? Who has a dedicated budget? And who understands the AI regulations for their industry? An expert panel discusses the findings of ISMG's First Annual Generative AI Study: Business Rewards vs. Security Risks.
The U.S. Department of Health and Human Services on Wednesday released a sweeping strategy document proposing how the Biden administration intends to push the healthcare sector - through new requirements, incentives and enforcement - into improving the state of its cybersecurity.
Researchers from Jamf Threat Labs said they managed to manipulate code on a compromised iPhone to make it appear as if the device is entering Lockdown Mode - but "without any of the protections that would normally be implemented by the service."
"How do we surprise our adversaries?" So asked Ollie Whitehouse, CTO of Britain's National Cyber Security Center, in a keynote speech at Black Hat Europe in London in which he urged defenders to focus on resilience and on finding fresh ways to impose material costs on adversaries.
A New York medical imaging services provider is notifying nearly 606,000 individuals that their information was potentially accessed and copied in a recent hacking incident. The entity is one of several medical imaging centers that have reported major hacking breaches in recent weeks and months.
A small group of researchers says it has identified an automated method for jailbreaking OpenAI, Meta and Google large language models with no obvious fix. Like the models it coerces into giving dangerous or undesirable responses, the technique itself relies on machine learning.
Enterprises have struggled to balance speed with security and stability, said Sean D. Mack, author, speaker and former CIO and CISO at Wiley. DevSecOps is the superpower that resolves this long-standing conflict and allows organizations to deliver software faster and more securely.
Security experts testified to Congress that the National Institute of Standards and Technology is better placed than the Transportation Security Administration to lead implementation efforts for security-enhanced identification cards ahead of a looming 2025 deadline for nationwide compliance.
A Russian military intelligence hacking group is winning the race to exploit known vulnerabilities before system administrators can apply patches, warns Proofpoint. The firm has seen a spike in activity from TA422, also known as APT28, Fancy Bear and Forest Blizzard.
Seoul police have accused the North Korean hacker group Andariel of stealing sensitive defense secrets from South Korean defense companies and laundering ransomware proceeds back to North Korea. Police said the hackers stole 1.2 TB of data, including information on advanced anti-aircraft weapons.
A recent spike in ransomware attacks has prompted federal regulators and the American Hospital Association to issue urgent warnings advising hospitals and other healthcare organizations to guard against exploitation of the Citrix Bleed software flaw affecting some NetScaler ADC and NetScaler Gateway devices.
Genetics testing firm 23andMe says hackers, in a credential-stuffing attack this fall, siphoned the ancestry data of 6.9 million individuals. 23andMe disclosed the attack on Oct. 1, stating the attackers had scraped the profiles of 23andMe users who opted in to the company's DNA Relatives feature.
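The teaser gives no technical detail on the attack, but as a purely hypothetical sketch, credential stuffing often surfaces as a single source address attempting logins against many distinct accounts. The log format, threshold and sample data below are assumptions for illustration, not details from the 23andMe incident.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch: flag source IPs that attempt logins against many
// distinct accounts, a rough signature of credential stuffing.
// The (ip, username) records and the threshold are invented examples.
fn main() {
    let failed_logins = [
        ("203.0.113.5", "alice"),
        ("203.0.113.5", "bob"),
        ("203.0.113.5", "carol"),
        ("203.0.113.5", "dave"),
        ("198.51.100.9", "alice"),
    ];

    // Count how many different accounts each IP has targeted.
    let mut accounts_per_ip: HashMap<&str, HashSet<&str>> = HashMap::new();
    for &(ip, user) in &failed_logins {
        accounts_per_ip.entry(ip).or_default().insert(user);
    }

    const THRESHOLD: usize = 3; // assumed cutoff for illustration
    for (ip, users) in &accounts_per_ip {
        if users.len() >= THRESHOLD {
            println!(
                "possible credential stuffing from {ip}: {} distinct accounts",
                users.len()
            );
        }
    }
}
```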
Security researchers were able to access and modify an artificial intelligence code generation model developed by Facebook after scanning AI developer platform Hugging Face and code repository GitHub for exposed API access tokens. Tampering with training data is among the top threats to large language models.
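The report doesn't describe the researchers' tooling, but the underlying idea can be sketched simply: scan text for strings that match known token formats, such as Hugging Face access tokens beginning with "hf_" or GitHub classic personal access tokens beginning with "ghp_". The sketch below is illustrative only; real secret scanners match many more patterns, apply entropy checks and validate candidate tokens before reporting them.

```rust
use std::io::{self, BufRead};

// Illustrative sketch only: flag lines from stdin that appear to contain
// hard-coded access tokens, using the documented "hf_" (Hugging Face)
// and "ghp_" (GitHub classic PAT) prefixes.
fn main() {
    let stdin = io::stdin();
    for (line_no, line) in stdin.lock().lines().enumerate() {
        let line = match line {
            Ok(l) => l,
            Err(_) => continue, // skip unreadable lines
        };
        for prefix in ["hf_", "ghp_"] {
            if line.contains(prefix) {
                println!(
                    "line {}: possible exposed token ({prefix}...)",
                    line_no + 1
                );
            }
        }
    }
}
```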