Navigating AI-Driven Innovation in Regulated Industries


The full title of this article could have been None Shall Pass Until Thorough Software, Security, and Regulatory Compliance Reviews Ensure Conformance to Industry and Government Standards, but that doesn’t have the same ring to it.

The democratization of Artificial Intelligence has opened opportunities for a broader range of employees to jump into software development. Even those with little technical expertise can solve business problems using no-code platforms, Large Language Models (LLMs), and AI code assistants. This surge in creativity is fantastic for innovation, but it also brings significant risks, especially in regulated industries where the stakes are higher.

The ease with which non-technical users can now create and deploy systems poses a unique challenge: how to allow creativity and problem-solving without crossing the boundaries of regulatory compliance and security.


AI Empowerment in Regulated Industries

In regulated industries, the consequences of mistakes can be severe. Take healthcare, for example—if a system is poorly developed, it can lead to regulatory violations, hefty fines, compromised patient safety, or, in the worst cases, loss of life. Software as a Medical Device (SaMD) is a prime example of where AI-driven development must be carefully controlled. SaMD solutions are subject to strict regulatory requirements to ensure safety and effectiveness. If someone without the right expertise uses AI to develop a SaMD application, they might unknowingly create a product that doesn’t meet safety standards or regulatory guidelines. This could lead to serious problems like patient harm, product recalls, or even legal trouble for the company.

None Shall Pass: AI-Enabled SaMD

The temptation to move fast with AI must be tempered with the reality that in regulated industries, “None Shall Pass” without strict adherence to the rules. The FDA recognizes this challenge and has taken steps to address the responsible use of AI in medical products. In March 2024, they released the paper Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together, signaling a commitment to “advance the responsible use of AI in medical products” and to “cultivate a patient-centered regulatory approach that emphasizes collaboration and health equity”. Despite these efforts, however, comprehensive guidance for AI use in SaMD may still be on the horizon. Companies need to establish their own governance and change control practices to protect both themselves and their users.

This doesn’t mean that you must ban AI completely. AI is just another tool in the toolbox, albeit a powerful one, and you want your brightest minds to use it. What you need is to make sure they’re using it properly. It is a fine line to walk: allowing innovation while managing risk.

Here are some strategies that could help you:

1. Enable AI to Define the Problem, Not the Solution

Let’s talk about communication. Is there anything harder for people? Anyone who’s tried to create a common language between business and technical teams knows the struggle. We’re just not that great at it—it’s not something that’s taught in schools, and many engineers don’t see it as important (spoiler alert: it is!).

Sitting in a meeting with domain experts isn’t always efficient. They often struggle to articulate what they need, while software engineers demand concrete answers. We walk out of meetings thinking we’re on the same page, only to find out later that we’re not. Could we leverage AI to help bridge this gap? A good strategy is to train domain experts in AI and no-code tools, letting them come up with quick solutions on their own. Will these solutions scale? Probably not. Will they be easy to modify or extend? Unlikely. Will they meet your security protocols? Doubtful. But what they build will clearly show how they’d approach the problem and highlight the most important aspects they need to solve. Whatever they create will be invaluable input for the software team. 

2. Review Boards with Fast Feedback Loops

There’s definitely a place for architecture and security review boards, especially as your company scales and your engineering team grows. These boards can ensure that any application using AI technologies conforms to both company and regulatory guidelines. But beware—these boards can do more harm than good if not managed carefully. The architects on these boards should be pros at riding Gregor Hohpe’s Architect Elevator. They need to spend as much time down in the engine room with the team as they do in the penthouse, otherwise you risk running into more communication problems.

And then there’s the issue of feedback speed. Good teams have one thing in common: they move at a fast, sustainable pace. They know the best way to build software is to iterate quickly and get feedback from users. If your review boards meet once a month or take too long to provide feedback, your team will grow frustrated, innovation will stall, and eventually, you’ll lose your best people.

3. Collaboration, Collaboration, Collaboration

To minimize the risk of non-compliant software systems and avoid costly last-minute changes, it’s crucial to bring together domain experts, software engineers, security, and regulatory professionals from the very start. By fostering a collaborative environment where multidisciplinary teams work together early in the development process, you create a solid foundation for success.

Encouraging practices like ensemble programming not only helps to catch potential issues before they become problems but also builds a culture of shared ownership and continuous improvement, reducing the likelihood of unexpected challenges later.

4. Leverage Automation

A robust Continuous Integration workflow is essential. Contrary to popular belief, CI/CD isn’t just about using GitLab, CodePipeline or similar tools. You should start with the process and culture first. However, there’s no reason not to leverage the myriad of technological solutions available. Companies like Snyk and Synopsys offer AI code analysis tools that can help secure your team’s AI-generated code.
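As a sketch, one way to put this into practice is to gate every pull request on a static analysis step so that AI-generated code is scanned before it merges. The workflow fragment below is illustrative only: it assumes a GitHub Actions setup and the Snyk CLI, and the job names and secret name are placeholders you would adapt to your own pipeline and tooling.

```yaml
# Illustrative CI gate: the build fails if static analysis
# flags issues in the submitted (possibly AI-generated) code.
name: compliance-gate
on: [pull_request]
jobs:
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Snyk CLI
        run: npm install -g snyk
      - name: Scan source for security issues
        run: snyk code test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}  # placeholder secret name
```

The point is not the specific tool: whatever scanner you choose, making it a mandatory, automatic step keeps the review burden off humans for the issues machines catch well.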

5. Experiment While Building Supporting Systems

AI doesn’t have to be limited to your core domain. In fact, it can be a good idea to start by applying these new technologies to supporting systems. This allows your team to experiment, learn where the boundaries are, and establish processes for safe use. For example, AI can be leveraged to help HR or Sales teams with repetitive tasks or to automate business reports. Once you’ve gained confidence, you can then explore more complex applications in your core domain with a clearer understanding of the potential risks and rewards.

6. Use AI to Flex Your Team’s Regulatory Muscles

AI can also play a valuable role in helping teams navigate regulatory landscapes and compliance documents. While a strong Quality Management System (QMS) is essential in any highly regulated environment, these systems often contain hundreds of procedures and work instructions, making them challenging to navigate, especially for new employees. Additionally, there are standards and guidance documents from government and international organizations that add to the complexity.

By inputting these documents into a Generative AI model, your team can create a Q&A system that simplifies navigating regulatory documentation for employees. AI tools like Amazon Q and Microsoft’s Copilot make this process more accessible than ever.
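To make the idea concrete, here is a minimal sketch of the retrieval step behind such a Q&A assistant. A production system would use an LLM service (tools like Amazon Q or Copilot handle the heavy lifting for you); this toy version simply ranks procedures by keyword overlap with the question. The document titles and excerpts are hypothetical, invented purely for illustration.

```python
# Toy sketch of retrieval over QMS documents: rank each work
# instruction by how many terms it shares with the question.
import re
from collections import Counter

# Hypothetical QMS work-instruction excerpts (illustrative only).
QMS_DOCS = {
    "WI-014 Design Change Control": (
        "All design changes to software classified as SaMD must be "
        "reviewed and approved before release."
    ),
    "WI-027 Supplier Audits": (
        "Suppliers of critical components are audited annually."
    ),
    "WI-031 Complaint Handling": (
        "Customer complaints involving patient safety are escalated "
        "within 24 hours."
    ),
}

def tokenize(text: str) -> Counter:
    """Lowercase and split text into a bag of word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_match(question: str) -> str:
    """Return the title of the document sharing the most terms with the question."""
    q = tokenize(question)
    def overlap(item):
        title, body = item
        return sum((q & tokenize(title + " " + body)).values())
    return max(QMS_DOCS.items(), key=overlap)[0]

print(best_match("Which work instruction covers design changes to SaMD?"))
# → WI-014 Design Change Control
```

In a real deployment, the keyword scoring would be replaced by embeddings and the matched passages would be handed to a generative model to compose the answer, but the retrieve-then-answer shape stays the same.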

As we chart the exciting yet challenging landscape of AI-driven innovation in regulated industries, it’s clear that technology is a powerful ally. AI can help us solve complex problems, streamline processes, and push the boundaries of what’s possible. But amidst all this excitement, let’s not forget the most crucial element of all: the human touch.

AI may be able to analyze data, automate tasks, and even generate insights, but it’s our human judgment and oversight that ensure these technologies are used responsibly and ethically. In regulated industries, where the stakes are high and the consequences of errors can be severe, having a team of skilled professionals overseeing and guiding AI applications is not just important—it’s indispensable.

Partner With Nextern

At Nextern, we understand the unique challenges and opportunities of AI-driven innovation in the regulated medical device industry. Our multidisciplinary team of highly skilled professionals brings years of experience and deep expertise to every project. From navigating complex regulatory landscapes to ensuring robust compliance and safety, we are dedicated to helping you leverage AI responsibly and effectively.

With our extensive knowledge and proven track record in the medical device space, we’re here to guide you through every step of your journey. Whether you’re developing new solutions or enhancing existing systems, our experts are ready to partner with you to achieve excellence and drive success.

Follow us on LinkedIn!
