Overview
SurePath AI is a network-level governance platform that enables safe adoption of Generative AI across the organization. This guide walks IT admins and stakeholders through the essential configuration steps required to deploy a SurePath AI tenant, from initial setup through advanced features like telemetry offload and private portal enablement.
The configuration process is designed to be flexible and iterative. Organizations can begin with basic monitoring to understand their GenAI usage patterns, then progressively implement more sophisticated controls and private AI capabilities as their governance strategy matures. This approach allows organizations to make data-driven decisions about policy enforcement rather than implementing restrictions without visibility into actual usage.
Throughout this guide, each section is marked as Required, Recommended, or Optional to help admins prioritize configuration tasks based on the organization's needs and timeline.
Base configuration
The initial configuration establishes the foundation of a SurePath AI deployment. These steps create the admin framework, establish user identity and authentication, and enable the network integration required to intercept and govern GenAI traffic.
Add admin users (Required)
Admin users are the foundation of a SurePath AI tenant configuration. Unlike end users who are imported through Directory Synchronization, admin users must be added manually and are assigned specific admin roles (Owner, Admin, or Auditor) that control their level of access to the platform.
When admins first access their SurePath AI tenant, they will log in using the Magic Link authentication method. The initial Owner's first task should be to add any additional admin users the organization requires. Admins should consider the organizational structure and determine who needs access to configure policies, review user activity, or audit the platform.
Learn more: Configuring Admin Users
SSO integration (Recommended)
Single Sign-On (SSO) integration enables end users to authenticate to SurePath AI using the organization's existing identity provider. This provides a seamless authentication experience, centralizes identity management, and ensures that user activity can be accurately attributed to specific individuals in audit logs and analytics.
SurePath AI supports both OIDC (OpenID Connect) and SAML 2.0 authentication protocols and works with major identity providers including Microsoft Entra ID, Okta, Auth0, PingIdentity, OneLogin, WorkOS, and Rippling. While not strictly required, SSO is recommended because it enables automatic user authentication when traffic is intercepted, creating a frictionless experience for end users. Without SSO, users must authenticate using the Magic Link method, which adds friction and can negatively impact the user experience.
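Regardless of provider, an OIDC integration exchanges a small, predictable set of values between the identity provider and the application. The sketch below is purely illustrative: the field names are placeholders, not SurePath AI settings, and the actual values (including the redirect URI to register with the IdP) come from the SurePath AI admin console and the identity provider's app registration.

```typescript
// Illustrative only — field names are placeholders, not SurePath AI settings.
// Collect the first three values from your identity provider's app registration,
// and register the redirect URI that your SurePath AI tenant provides.
const oidcIntegration = {
  issuerUrl: "https://login.example-idp.com/",        // from the IdP (placeholder)
  clientId: "example-client-id",                      // from the IdP (placeholder)
  clientSecret: "<stored securely, never committed>", // from the IdP (placeholder)
  redirectUri: "https://<tenant-callback-url>",       // provided by SurePath AI (placeholder)
  scopes: ["openid", "profile", "email"],             // typical OIDC scopes
};
```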
Learn more: Configure SSO for Any Provider
Directory synchronization (Recommended)
Directory Synchronization uses the SCIM (System for Cross-domain Identity Management) protocol to import user and group information from directory services into SurePath AI. This enables group-based policy enforcement and ensures that the SurePath AI configuration stays synchronized with the organizational structure.
While admins can create and manage groups manually, directory synchronization is strongly recommended because it automatically maintains user and group membership as the organization changes. When employees join, leave, or change roles, those changes are reflected automatically in SurePath AI without manual intervention. Directory sync also enables admins to leverage existing security groups for policy enforcement—for example, using an "Engineering" group from the directory to grant engineering-specific GenAI access through a Group Policy in SurePath AI.
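For context, SCIM is a standard JSON-over-HTTP protocol (RFC 7643/7644): the directory service pushes user and group objects to SurePath AI automatically, so no admin-written code is involved. The sketch below only illustrates the kind of group payload an identity provider sends, using the "Engineering" example above; the member IDs and addresses are placeholders.

```typescript
// Illustrative SCIM 2.0 Group payload (RFC 7643) as pushed by an identity provider.
// Member IDs, display values, and the group name are placeholders.
const engineeringGroup = {
  schemas: ["urn:ietf:params:scim:schemas:core:2.0:Group"],
  displayName: "Engineering",
  members: [
    { value: "7c9e6679-7425-40de-944b-e07fc1f90ae7", display: "jane.doe@example.com" },
    { value: "9b2f1c44-1234-4a6b-8d2e-0f3a5c7e9b11", display: "sam.lee@example.com" },
  ],
};
```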
Learn more: Configure Directory Sync for All Vendors
Network integration (Required)
Network integration is the mechanism that allows SurePath AI to intercept, inspect, and govern traffic to public GenAI services. Without network integration, SurePath AI cannot apply policies to public services like ChatGPT, Claude, or Gemini. However, network integration is not required to access the SurePath AI admin interface (admin.surepath.ai) or the private portal (portal.surepath.ai)—these are always directly accessible.
There are two primary approaches to network integration: SASE (Secure Access Service Edge) and Proxy PAC (Proxy Auto-Configuration). SASE integration is the preferred method and uses vendor-specific forward proxy chaining (often called "forward-to-proxy") to direct only GenAI traffic to SurePath AI while leaving all other web traffic on its normal path. For organizations without a SASE deployment, Proxy PAC is an excellent alternative that uses Mobile Device Management (MDM) platforms to distribute a PAC file URL to endpoints. Both methods achieve the same outcome: only GenAI traffic is steered to SurePath AI for inspection and policy enforcement.
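For organizations taking the Proxy PAC route, the PAC file itself is a small piece of JavaScript that the endpoint evaluates for every request. The sketch below shows the general shape only: the GenAI hostnames and proxy address are placeholders, and in practice you distribute the PAC file URL supplied by SurePath AI via your MDM platform rather than authoring your own list.

```javascript
// Illustrative PAC file — hostnames and proxy address are placeholders, not
// SurePath AI values. Distribute the PAC URL provided by SurePath AI via MDM.
function FindProxyForURL(url, host) {
  // Steer only known GenAI destinations through the governance proxy.
  if (
    dnsDomainIs(host, "chatgpt.com") ||
    dnsDomainIs(host, "claude.ai") ||
    dnsDomainIs(host, "gemini.google.com")
  ) {
    return "PROXY genai-proxy.example.com:443"; // placeholder proxy address
  }
  // Everything else stays on its normal path.
  return "DIRECT";
}
```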
Learn more: Network Integration
Understanding the default configuration
After completing the initial configuration steps above, the SurePath AI tenant is configured to monitor and log all GenAI traffic by default. This is an intentional design decision that prioritizes visibility and data collection before policy enforcement.
By default, the Default Public Service Action is set to Allow - Monitor. This means that all workforce interactions with public GenAI services are allowed to proceed normally, but every interaction is intercepted, recorded, and analyzed by SurePath AI. Users experience no disruption or blocking—they can continue using ChatGPT, Claude, Gemini, and other GenAI tools exactly as they did before. However, SurePath AI is silently capturing a complete audit trail of all GenAI usage, including which services are being accessed, what prompts are being sent, what responses are being returned, and whether any sensitive data or risky content is being detected.
This monitoring and logging approach is valuable because most organizations don't have visibility into how their workforce is actually using GenAI. Without data, it's difficult to make informed decisions about which services should be allowed, which should be restricted, and what sensitive data controls are needed. Admins don't know what they don't know—and attempting to enforce policy without understanding actual usage patterns often leads to either overly restrictive policies that hamper productivity or overly permissive policies that fail to mitigate risk.
During this initial phase, admins should review the User Activity logs, analytics dashboards, and violation reports to understand:
Which GenAI services are being used most frequently
What types of prompts and use cases are most common
Whether sensitive data (PII, confidential information) is being sent to external services
Which teams or user groups are the heaviest GenAI users
What risks or policy violations are occurring
This data-driven understanding enables admins to transition from monitoring-only to active policy enforcement when they have sufficient evidence to create effective, balanced policies. There is no prescribed timeline—some organizations may feel confident after a week of data collection, while others may prefer a month or more. The transition should occur when admins believe they have enough information to make informed policy decisions that balance security, compliance, and productivity.
To begin enforcing policy, admins will modify the Default Public Service Action in the Default Policy settings and implement the Public Service Policy controls described later in this guide.
User activity and insights
Once network integration is in place and GenAI traffic is being intercepted, SurePath AI automatically captures and analyzes all workforce interactions with GenAI services. This creates two powerful capabilities for admins: User Activity for detailed event-level investigation and Insights for high-level analytics and business intelligence.
User Activity centralizes up to 30 days of governed GenAI events and allows admins to review who used which services, when, and for what purpose. Each event includes rich context such as user identity, risk level, policy outcome, intent classification, and full conversation history (when available). Admins can filter and search events, investigate specific requests in detail, and export CSV files for offline analysis. This granular visibility is essential for investigating policy violations, conducting security audits, and understanding how specific users or teams are leveraging GenAI.
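As a simple illustration of offline analysis, the sketch below tallies exported events per service from a CSV export. The column name ("service") is an assumption — check the header row of your actual export and adjust accordingly. It uses the csv-parse package for robust parsing, since prompt text may contain commas.

```typescript
// Tally events per service from a User Activity CSV export.
// The "service" column name is an assumption — inspect your export's headers first.
import { readFileSync } from "node:fs";
import { parse } from "csv-parse/sync";

const rows: Record<string, string>[] = parse(readFileSync("user-activity-export.csv"), {
  columns: true,          // use the header row for field names
  skip_empty_lines: true,
});

const counts = new Map<string, number>();
for (const row of rows) {
  const service = row["service"] ?? "unknown"; // hypothetical column name
  counts.set(service, (counts.get(service) ?? 0) + 1);
}
console.table(Object.fromEntries(counts));
```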
Insights provides dashboard-based analytics that transform raw event data into actionable business intelligence. The Insights page includes four focused dashboards covering Public Services, Risk, Adoption, and Private Portal usage. These interactive dashboards help admins quantify sensitive data exposure, track adoption trends across departments, identify shadow AI usage, optimize licensing costs, and demonstrate ROI to executive stakeholders. Insights supports flexible filtering with unlimited look-back, drill-through to the underlying transactions, and automated PDF exports or email subscriptions to keep stakeholders informed.
Together, User Activity and Insights empower data-driven governance decisions. During the initial monitoring phase, these tools help admins understand baseline GenAI usage patterns before enforcing policy. Once enforcement is active, they provide continuous visibility into policy effectiveness, emerging risks, and adoption trends. Most organizations use User Activity for tactical investigations and compliance auditing, while using Insights to communicate strategic value to business leaders and guide investment decisions.
Learn more:
Telemetry offload (Optional)
Telemetry Offload enables organizations to export all SurePath AI user activity logs to a customer-managed AWS S3 bucket for long-term retention, SIEM integration, compliance archiving, and advanced analytics. By default, SurePath AI retains user activity logs within the platform, but many organizations require that security and compliance data be centralized in their own systems for regulatory compliance, cross-system correlation, and specialized reporting.
Telemetry Offload files are uploaded to the S3 bucket every 15 minutes in fully structured JSON format. To enable this feature, admins will configure an AWS Connector (which can be the same connector used for Private Models or a separate dedicated connector) and specify a Telemetry Destination that identifies the S3 bucket location where log files should be uploaded.
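Once offload is running, the files can be consumed with standard S3 tooling. The sketch below lists and parses recently uploaded log files using the AWS SDK; the bucket name, key prefix, and the assumption that each file contains a JSON document or array of events are placeholders — confirm the exact layout against the Telemetry Offload Log Structure documentation.

```typescript
// Read offloaded SurePath AI telemetry files from a customer-managed S3 bucket.
// Bucket, prefix, and the JSON shape are assumptions — verify against the
// Telemetry Offload Log Structure documentation.
import { S3Client, ListObjectsV2Command, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const Bucket = "my-surepath-telemetry";   // placeholder bucket name
const Prefix = "surepath/activity/";      // placeholder key prefix

async function readTelemetry(): Promise<void> {
  const listing = await s3.send(new ListObjectsV2Command({ Bucket, Prefix }));
  for (const obj of listing.Contents ?? []) {
    if (!obj.Key) continue;
    const file = await s3.send(new GetObjectCommand({ Bucket, Key: obj.Key }));
    const body = await file.Body?.transformToString();
    if (!body) continue;
    const events = JSON.parse(body);      // structured JSON, per the docs
    console.log(obj.Key, Array.isArray(events) ? events.length : 1, "event(s)");
  }
}

readTelemetry().catch(console.error);
```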
Learn more: Telemetry Offload Log Structure
Create public service policy (Recommended)
Once admins have gathered sufficient data from the initial monitoring and logging phase, creating a public service policy is the next critical step in the GenAI governance journey. A comprehensive Public Service Policy defines which public GenAI services the workforce can access, what content controls are applied, and how sensitive data is protected.
Public Service Policies are built using a least-privilege model with a Default Policy that applies to all users and optional Group Policies that grant additional access to specific user populations. The policy framework consists of three main components: Service Access Control (using the Public Service Catalog), Content Controls (High-risk Request, Confidential Data, Programming Language, Harmful Content, and Prompt Injection), and Sensitive Data PII Detection for personally identifiable information.
Most organizations begin with less restrictive actions like Monitor or Warn to build understanding and coach users, then progressively move to stronger enforcement actions as their governance maturity increases.
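To visualize how the layered model composes, the sketch below expresses a hypothetical policy set as data. This is purely illustrative: SurePath AI policies are configured in the admin console, not authored in this format, and the field names, service names, and action values beyond those named above are placeholders.

```typescript
// Purely illustrative — SurePath AI policies are configured in the admin console.
// Field names, the group-specific service, and the "Block" action are placeholders.
const publicServicePolicySketch = {
  defaultPolicy: {
    defaultPublicServiceAction: "Allow - Monitor", // baseline applied to all users
    contentControls: {
      confidentialData: "Warn",      // coach users before escalating enforcement
      harmfulContent: "Block",       // assumed stronger enforcement action
      promptInjection: "Block",
    },
  },
  groupPolicies: [
    {
      group: "Engineering",                              // synced from the directory
      additionalServices: ["<engineering-approved service>"], // extra access beyond the default
      contentControls: { programmingLanguage: "Monitor" },
    },
  ],
};
```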
Learn more:
Private portal policy (Recommended)
The SurePath AI Private Portal provides a unified interface where the workforce can access private AI models that run in the organization's own cloud environment, governed by the same policy framework that controls public services. The key distinction between private and public AI is data residency and control—when users interact with private models through the portal, their prompts and model responses remain entirely within the organization's cloud environment. The organization controls the infrastructure, pays the cloud provider directly for inference costs, and maintains complete ownership of all data.
The Private Portal is always directly accessible at portal.surepath.ai and does not require network integration or traffic interception. However, as with public services, access to portal resources (private models, data sources, and assistants) is controlled through the same policy framework using the Default Policy and Group Policies.
Private models
Private models are GenAI foundation models that organizations operate in their cloud environment (AWS, Azure, or Google Cloud) and make available to the workforce through policy. To enable private models, admins must first create a connector that establishes authentication and authorization for SurePath AI to communicate with the cloud AI services. SurePath AI supports AWS (Bedrock) using an assumed role approach, as well as Azure OpenAI and Google Gemini using API key-based authentication.
The configuration process involves enabling the foundation model in the cloud environment, creating a connector in SurePath AI, and adding the specific private models to be made available. Once configured, admins control which users or groups have access to specific private models through the Default Policy and Group Policies—just like they control access to public services.
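Before adding a private model in SurePath AI, it can help to confirm the foundation model is actually enabled and visible to the AWS account and role the connector will assume. The sketch below lists available Bedrock foundation models with the AWS SDK; the region is a placeholder, and the equivalent checks for Azure OpenAI or Google Gemini would use those providers' own tooling.

```typescript
// List Bedrock foundation models visible to the account/role the connector will use.
// Region is a placeholder — use the region where you enabled model access.
import { BedrockClient, ListFoundationModelsCommand } from "@aws-sdk/client-bedrock";

const bedrock = new BedrockClient({ region: "us-east-1" });

async function listModels(): Promise<void> {
  const resp = await bedrock.send(new ListFoundationModelsCommand({}));
  for (const model of resp.modelSummaries ?? []) {
    console.log(model.modelId, "-", model.providerName);
  }
}

listModels().catch(console.error);
```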
Data sources and assistants
In addition to private models, the Private Portal supports advanced capabilities including Data Sources (private knowledge bases that provide proprietary context to model responses) and Assistants (pre-configured AI helpers with specialized prompts and data access). Both features are governed through the same policy framework, allowing admins to use Group Policies to grant specific users or groups access to targeted resources based on their role, department, or use case.
Policy-based access control
Just as admins use the Default Policy and Group Policies to control which public GenAI services users can access, they use the same policy framework to control Private Portal resources. For example, admins might enable a general-purpose private model in the Default Policy for all users, then use Group Policies to grant engineering teams access to an engineering-specific data source or give customer support teams access to a support-focused assistant. This unified policy model ensures consistent governance across both public and private GenAI access.
Cost considerations
When using private models, charges for model inference are billed directly to the organization's cloud provider account (AWS, Azure, or Google Cloud), not through SurePath AI. Organizations should monitor cloud usage and costs as they roll out private model access to ensure they understand the financial impact.
Learn more:
Next steps
After completing the configuration steps in this guide, the SurePath AI deployment will be operational with a solid foundation for GenAI governance. Admins should consider these additional areas to deepen the implementation:
Review User Activity - Regularly examine user activity logs, analytics dashboards, and violation reports to understand adoption trends and refine policies
Customize Branding - Configure organization branding and messaging to provide a consistent user experience aligned with the organization's identity
Explore Advanced Features - Investigate Data Sources, Assistants, and advanced policy configurations as the governance program matures
Establish Operational Procedures - Define processes for policy updates, user provisioning, incident response, and ongoing administration
Engage Stakeholders - Share insights from SurePath AI analytics with business leaders to demonstrate ROI and inform strategic AI adoption decisions
SurePath AI is designed to grow with the organization's AI governance needs. Admins should start with the fundamentals outlined in this guide, gather data, make informed decisions, and progressively implement more sophisticated controls as understanding and requirements evolve.
