The CIO Professional Network, the private CIO, CISO, and CTO community for The National CIO Review, gathered for another Roundtable discussion hosted by Jerry Heinz and William Novak, who shared how AI skills and workflow tools are being used inside business environments. Through live demos and open dialogue, the session focused on how leaders can build repeatable and governed AI systems that support daily operations.
The conversation presented AI as an operational layer that can be designed and audited much like any other enterprise technology system.
The group covered skill design, agent governance, workflow automation, version control, and enterprise AI adoption across different business functions. Through examples such as daily news digests and calendar synchronization, the discussion showed how purposeful AI workflows can create value when paired with strong guardrails.
Why It Matters: AI platforms give leaders the ability to configure agents and connect them directly to business processes, which brings with it a new form of responsibility. The opportunity lies in automating repetitive work and improving responsiveness across teams. The responsibility is to ensure those systems are governed and built with the same discipline as any other enterprise application. This Roundtable emphasized that the real advantage comes from designing AI tools that are structured, repeatable, and aligned to business needs.
- Skills Turn General AI Into Purpose-Built Workflows: Attendees noted that regardless of whether a platform calls them skills, custom GPTs, Gems, or Copilot agents, the idea is the same: a configured workflow built for a specific task. Instead of relying on a fresh prompt each time, a skill gives the model a fixed role, standing instructions, and an expected output structure, which makes results more consistent across repeated use. The group discussed how this turns AI into a reusable process rather than a one-time chat tool.
- A Strong Skill Depends on Clear Structure: A clear framework for skill design also emerged during the discussion. The group was shown how a well-built skill gives the agent clear direction, enough background to understand the task, and firm boundaries around what it is allowed to access or do. It also gives the agent the ability to take specific actions when needed, whether that means searching for information from approved sources or carrying out a defined workflow. The attendees conferred on how all of these instructions can be organized in a simple markdown file, making the process far more approachable and allowing non-programmers to refine the skill over time.
- AI Can Help Teams Do More Without More Staff: One participant shared that his own exploration started when downsizing forced him to absorb more operational responsibility while also helping his team produce more. That led to automations for tasks like onboarding and log analysis, which later grew into more advanced AI-assisted workflows. Other leaders shared similar examples in which well-designed agents absorbed repetitive work that would otherwise have required additional headcount.
- Governance and Access Control Must Be Built In Early: Audience questions focused heavily on compliance and how to track what was actually being accomplished. Jerry described how his systems rely on tightly scoped permissions, defined tool access, and platform hooks that expose agent activity. In practice, this means agents only access approved contexts and systems, with permissions managed similarly to any other user or application identity.
- Version Control and Observability Support Safe Adoption: When asked about maintaining and improving these systems over time, one attendee pointed to Git-based versioning as an important safeguard. By working inside Visual Studio and setting the workflow to save changes into Git automatically, a version history is preserved and unwanted edits can be rolled back. Attendees also described using monitoring and local observability tools to inspect agent behavior. Together, these controls help teams manage AI safely as workflows grow more complex.
- Different Platforms Serve Different Business Needs: Examples showed how similar ideas can be adapted for a professional services environment. Using Claude to help generate instruction sets and then moving those into Copilot, members described tools for accounting news, contract management dashboards, HR policy support, and insurance policy reviews. These experiences suggested that organizations may need multiple AI platforms depending on the task and the systems already in place.
- Better Results Start With Better Requirement Gathering: Several attendees shared tips for improving agent design. One recommendation was to have the model interview the user before building a skill, which helps surface requirements that might otherwise go unstated. Participants added that voice prompting is especially effective because people naturally provide more nuance when speaking than when typing. It was also noted that capping the number of questions a model asks keeps the process focused and moving.
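The skill structure described above, a clear role, enough context, firm boundaries, permitted actions, and an expected output, can all live in one plain markdown file that non-programmers can refine. The skill name, headings, and wording below are invented for illustration and are not taken from the demos:

```markdown
# Skill: Daily News Digest   <!-- hypothetical skill name -->

## Role
You are a research assistant producing a short morning briefing for the CIO.

## Context
Readers are senior IT leaders. Keep each summary vendor-neutral and under 50 words.

## Boundaries
- Pull only from the approved source list below; never browse other sites.
- Do not include opinion, speculation, or paywalled content.

## Approved Sources
- (feeds authorized by IT governance go here)

## Output Format
Five bullets, each containing: headline, one-sentence summary, source link.
```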
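The tightly scoped permissions discussed under governance could be captured in a declarative config. The generic YAML sketch below is entirely hypothetical; every field name is an assumption, since each platform defines its own schema for agent identity and tool access:

```yaml
# Hypothetical agent identity config: the agent is treated like any other
# application identity, with least-privilege access and full audit hooks.
agent: news-digest-bot
identity: svc-news-digest@corp.example.com   # managed like a service account
allowed_tools:
  - web_search        # restricted to the approved source list
  - calendar.read     # read-only; no write scope granted
denied_tools:
  - file_system
  - email.send
audit:
  log_all_actions: true   # platform hooks expose every tool call for review
```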
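The requirement-gathering tip can be applied with a short preamble placed before a skill-building request. The wording below is illustrative, not a transcript of what attendees use:

```text
Before you draft this skill, ask me up to five clarifying questions,
one at a time, about the audience, data sources, and output format.
Do not start writing until I have answered or told you to proceed.
```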
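The Git-based safeguard attendees described can be sketched in a few shell commands. The repository layout and file names here are illustrative assumptions, not the attendee's actual setup:

```shell
# Sketch: keep AI skill files under Git so every edit is auditable and reversible.
set -e
mkdir -p skills-repo
git -C skills-repo init -q
git -C skills-repo config user.email "demo@example.com"  # local config for this sandbox repo
git -C skills-repo config user.name "Demo"

# A skill definition is just a markdown file, so it versions like any source file.
cat > skills-repo/news-digest.skill.md <<'EOF'
# Skill: Daily News Digest
Summarize approved feeds into five bullets each morning.
EOF

git -C skills-repo add news-digest.skill.md
git -C skills-repo commit -q -m "Add daily news digest skill"
git -C skills-repo log --oneline   # the preserved history
```

With the history in place, an unwanted change to a skill can be undone with `git revert` or by restoring the prior version of the file.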