Earlier this week, Grammarly CISO Suha Can took the stage at the Gartner IT Symposium/Xpo™ to share his framework for tackling a daunting enterprise IT challenge: how to safely and effectively adopt generative AI. Detailing the rigorous process he led at Grammarly, Suha revealed the framework he designed to help other IT leaders reap the rewards of generative AI while minimizing the risks to their organizations.
The Promise and Threat of Generative AI
In the fall of 2022, Suha left Amazon to join Grammarly as its new CISO and noticed an interesting trend developing. As Suha recalls, “Within my first few weeks, I started receiving a flood of emails, Slack messages, and questions in meetings about generative AI and how it would be used at Grammarly.” Generative AI was, all of a sudden, everywhere.
Suha understood that to lead Grammarly to a new frontier, he and his team would have to figure out how to integrate responsible and effective generative AI into Grammarly—both in the product and in internal processes.
This journey yielded in-depth learnings and insights that are now laid out in a framework covering the selection, close, and implementation of safe generative AI for enterprise use. For IT leaders weighing whether generative AI adoption is necessary for their organization, Suha warns that the idea of choice here is an illusion: chances are very high that your employees are already using generative AI, whether you know it or not.
Generative AI has the potential to fundamentally change the way organizations operate, interact with employees and customers, and drive business growth. Teams leveraging generative AI can improve the quality, speed, and accuracy of communication.
Rather than attempting to block the use of AI outright, Suha recommends that companies follow a strategic approach for adopting generative AI safely, based on his own recent experience at Grammarly.
The Framework for Safe Generative AI Adoption
Grammarly has over 14 years of experience developing and providing secure, private, and responsible AI. But earlier this year, we launched generative AI within our core product for the first time. That process involved building and rolling out our own generative AI features, which required evaluating and selecting our large language model (LLM) provider and combining it with our own proprietary AI and ML models.
Using this firsthand knowledge, Suha and the Grammarly team developed a framework that other tech leaders can use to safely select and deploy AI tools to harness the benefits of generative AI while minimizing its risks.
The framework is built around three key steps: selection, close, and implementation.
Selection
During this stage, tech leaders determine which generative AI products or partners are best suited for their business by evaluating specific criteria:
- Privacy by design
- Enterprise-grade security
- User controls
- Safety and fairness
- Legal and regulatory
Close
Leaders should ensure generative AI vendor contracts cover key provisions, such as:
- Content moderation and abuse monitoring
- Service cost, speed, and capacity
- Data processing and retention
Implementation
Once the right generative AI solutions are chosen, leaders need to chart a path for safe adoption and development across their organization, including:
- Usage policies
- Development policies
- Proactive security processes
With a clear, consistent framework guiding their adoption strategies, Suha believes, tech leaders can get to a place where generative AI gives them peace of mind—not nightmares.
Gain an Edge With a Responsible Approach to Generative AI
Generative AI presents an enormous opportunity for IT leaders to deliver a competitive advantage to their organizations.
Minimize risk while driving significant business impact. Obtain the in-depth details you need to adopt generative AI by downloading the full framework and an example of an Acceptable Use Policy inspired by Grammarly’s.