Jul 31, 2025
Written by Tom Vaknin, Chief Information Security Officer and Head of IT, Viz.ai
In healthcare, trust is everything, and as AI becomes more powerful, earning that trust gets harder. Artificial intelligence has been reshaping healthcare for years, but 2023–2024 marked a tipping point: generative AI went mainstream, large language models entered daily workflows, and AI is now embedded in everything from radiology consoles to help-desk chatbots. Viz.ai has always been on the cutting edge of AI, and as its CISO and Head of IT, I see my main job as keeping that edge safe and sound.
Responsible AI Is Bigger Than Models
Responsible AI weaves ethics, governance, and compliance into every phase—from design through deployment and beyond. It means asking not just “Can it work?” but “Should it, and under what guardrails?” In practice, we assess societal impact, patient-first values, and legal requirements, then bake in transparency, accountability, and risk checks so AI delivers benefits safely.
We use AI for productivity and coding, our vendors embed GenAI, and our own products touch real patients. At Viz.ai, responsibility isn’t just how we build AI—it’s how we use it and who we trust to build with. Patients First guides every decision: if a tool can’t serve patients safely, ethically, and transparently, it doesn’t belong in healthcare.
Responsible AI Everywhere
Today, responsible AI means examining every layer of the healthcare ecosystem, including our own stack. Sensitive data must be handled securely, from encrypted inputs to isolated training environments. Outputs must be tightly controlled and context-appropriate. HIPAA and other privacy regulations, such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR), must be fully met. And vendors must adhere to strict transparency, auditability, and data-governance standards.
In a regulated space like healthcare, these measures aren’t optional—they’re the baseline for safe, trustworthy innovation.
Trust Before We Use
When a team at Viz.ai wants to introduce a new tool—whether for productivity, analytics, or automation—we run it through a structured approval process. Reviews go deep, reflecting our Quality Squared value and our commitment to defense in depth and privacy in depth.
I personally sit on our AI review board, where every GenAI tool—whether developed in-house or brought in—is scrutinized for risk factors including model drift, misuse potential, and opaque decision-making. No tool gets greenlit unless it meets our multilayered security thresholds and aligns with our Patients First values.
We examine data handling (what’s collected, encrypted, and isolated), access control (who can see what), model behavior (training methods and risks), compliance (HIPAA, SOC 2, GDPR), and visibility (audit logs and monitoring). This checklist is only one slice of a wider, end-to-end evaluation that keeps security, privacy, and operational integrity front and center.
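As a purely illustrative sketch (not Viz.ai's actual tooling), the checklist above can be modeled as a simple gating function: a tool is greenlit only when every required category passes. All class, field, and tool names here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-layered tool review. The categories
# mirror the checklist above; names are illustrative only.

@dataclass
class ToolReview:
    name: str
    checks: dict = field(default_factory=dict)  # category -> passed?

    REQUIRED = (
        "data_handling",   # what's collected, encrypted, and isolated
        "access_control",  # who can see what
        "model_behavior",  # training methods and drift/misuse risks
        "compliance",      # HIPAA, SOC 2, GDPR
        "visibility",      # audit logs and monitoring
    )

    def failed(self):
        """Return required categories that are missing or failing."""
        return [c for c in self.REQUIRED if not self.checks.get(c)]

    def approved(self):
        """Greenlit only when every required check passes."""
        return not self.failed()

review = ToolReview("vendor-chatbot", {
    "data_handling": True,
    "access_control": True,
    "model_behavior": False,  # e.g., opaque decision-making unresolved
    "compliance": True,
    "visibility": True,
})
print(review.approved())  # False
print(review.failed())    # ['model_behavior']
```

The point of the gate is that a single failing category blocks approval; there is no weighted average that lets a risky tool squeak through.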
Fostering Awareness
Responsible AI is also cultural. Everyone at Viz.ai has a role to play, starting with a clear grasp of AI’s limits and impact. Through security training, internal and external webinars, and hands-on guidance, we equip teams to use AI thoughtfully.
Our I’m Accountable mindset means responsibility doesn’t sit with one team—it’s shared across the organization.
Where We Go From Here
AI’s momentum shows no sign of slowing, and we’re still near the beginning of its journey. It will touch every facet of healthcare—speeding diagnoses, streamlining workflows, and freeing clinicians to focus on patients. But great power demands great care: in 2025, responsible AI isn’t optional—it’s indispensable for protecting privacy, reducing bias, and maintaining transparency.
According to the 2025 AI Governance Survey by Pacific AI, 75% of organizations have AI usage policies, but only 59% have dedicated governance roles, and fewer than half actively monitor AI systems for issues like accuracy and misuse. This gap underscores the urgent need for stronger oversight and shared responsibility.
We must pair rapid innovation with equally rapid guardrails. That means forging partnerships between regulators, developers, and care providers to set enforceable ethical standards, expanding AI literacy so everyone—from executives to frontline staff—understands both potential and pitfalls, and baking fairness and inclusivity into our models from day one.
Responsible AI isn’t just good practice; it’s how we earn the right to keep innovating. We don’t just secure AI—we shape the rules of engagement. That’s the future of tech leadership in healthcare.
How is your organization approaching Responsible AI? Let’s build a safer, smarter future together.