
Former Palantir CISO Dane Stuckey joins OpenAI to lead security | TechCrunch

OpenAI logo with spiraling pastel colors (Image Credits: Bryce Durbin / TechCrunch)


Dane Stuckey, the former CISO of analytics firm Palantir, has joined OpenAI as its newest CISO, serving alongside OpenAI head of security Matt Knight.

Stuckey announced the move in a post on X Tuesday night.

“Security is critical to OpenAI’s mission,” he said. “It’s essential we meet the highest standards for compliance, trust, and security to protect hundreds of millions of users of our products, enable democratic institutions to maximally benefit from these technologies, and drive the development of safe AGI for the world.”

Stuckey started at Palantir in 2014 on the information security team as a detection engineering and incident response lead. Prior to joining Palantir, Stuckey spent over a decade in various commercial, government, and intelligence community roles spanning digital forensics, incident detection and response, and security program development, according to his blog.

Stuckey’s work at Palantir, an AI company rich in government contracts, could well help advance OpenAI’s ambitions in this area. Forbes reports that, through its partner Carahsoft, a government contractor, OpenAI is seeking to establish a closer relationship with the U.S. Department of Defense.

Since it lifted its ban on selling AI tech to the military in January, OpenAI has worked with the Pentagon on a number of software projects, including ones related to cybersecurity. It has also appointed the former head of the National Security Agency, retired Gen. Paul Nakasone, as a board member.

OpenAI has been beefing up the security side of its operation in recent months.

A few weeks ago, the company posted a job listing for a head of trusted compute and cryptography to lead a new team focused on building “secure AI infrastructure.” This infrastructure would entail capabilities to protect AI tech, security tool evaluations, and access controls “that advance AI security,” per the description.