Goldman Sachs staff express AI, OneGS and retention concerns amid hiring slowdown
Mid‑February Blind threads show Goldman Sachs staff flagging hiring slowdowns, the OneGS/AI rollout and retention worries amid growing shadow‑AI use.

“Over the last several days (mid‑February 2026) public threads and verified posts on Blind — the anonymous, verified‑employee platform widely used by finance and tech professionals — show a marked uptick in internal discussion,” the reporting found, with employees naming hiring slowdowns, the OneGS/AI initiative and retention as the central concerns.
That uptick surfaced alongside a wider conversation captured in a Hibob analysis that pulled employee comments from public forums. Hibob highlighted heavily engaged posts (one with 400+ upvotes and 250+ comments, another with 2,831+ upvotes and 2,329+ comments, and a third with 1,272+ upvotes and 683+ comments) and summarized the prevailing anxieties: “Employees worry about AI errors, especially in fields where accuracy is critical. They’re concerned about being held responsible for AI mistakes they didn’t catch, and they fear AI errors damaging their professional reputation.” Hibob also noted that “Employees aren’t anti-AI. They’re asking for something much more reasonable—thoughtful implementation that considers their needs and concerns.”
Security and prevalence statistics cited in parallel reporting raise the stakes for Goldman Sachs as it pilots OneGS and AI tools. ITPro cited MIT’s Project NANDA State of AI in Business 2025 report, which found “workers at over 90% of companies using chatbots to perform automated tasks versus just 40% of companies recording LLM subscriptions.” ITPro also quoted Anagram security training findings that “58% of employees admitted to posting sensitive data into AI tools, including client records, financial data, and internal documents” and that “As many as 40% also claimed they would knowingly violate company policies to finish a task quicker.” Harley Sugarman, founder & CEO at Anagram, wrote: “Employees are willing to trade compliance for convenience.” Regional Microsoft research cited by ITPro found “71% of surveyed employees in the region admitting to using unapproved AI tools at work,” and that “Over a fifth (22%) revealed they use shadow AI tools for financial tasks.”
Those usage patterns carry compliance and legal risk. ITPro quoted GDPR‑style penalty language warning that “major infringements like processing data for unlawful purposes can cost companies upwards of €20,000,000 or 4% of the organization’s worldwide revenue in the previous year: whichever is higher.” The report also noted past corporate reactions, saying early incidents have driven blanket bans and pointing to Samsung’s 2023 decision to forbid employees from using ChatGPT after proprietary code was uploaded to the platform.
Hibob’s guidance for HR and leadership echoed the specific anxieties raised on Blind: “Implement strong quality control processes for AI-generated work. Train your people to effectively review and verify AI outputs. Create clear protocols for handling AI errors and ensure your people aren’t unfairly blamed for system failures.” It also urged firms to “Be honest about which roles might change and invest in retraining programs early. Help your people see how they can work alongside AI rather than be replaced by it. Create new career paths that leverage human creativity and judgment alongside AI capabilities.”

The Blind signal is an immediate warning for Goldman Sachs managers: staff are publicly debating hiring pace, OneGS and retention even as external studies show shadow‑AI use and data exposure remain widespread. The excerpts provided do not include direct statements from Goldman Sachs on the Blind threads or OneGS rollout.
