From fast fills to field agents: how Fulcrum is building real AI for the field



To make dramatic progress in any endeavor, you need a vision, a target: a north star. Fulcrum is rolling out an industry-leading vision of Agentic AI that will help us navigate ongoing technological change, bringing us and our customers to a fully hands-free, AI-guided experience for field workers.
We see opportunities for increased productivity, reduced frustration, and higher-quality field data in the creative application of AI agent technology to tough, field-specific problems. Better still, we’ve already begun: voice, photo, and contextual inputs are being layered into existing workflows, and customers are testing our natural-language query capabilities. As Fulcrum advances toward full Agentic AI, teams can modernize now, without waiting for future releases.
Key insights
AI has taken over plenty of office workflows. Fieldwork, though? That’s a different world. While email summarizers and document chatbots make headlines, field agents are still stuck juggling fragile apps, bad interfaces, and endless pen-and-paper entry.
Fulcrum sees that gap and is building toward something better. We’re focused on giving field teams the kind of intelligent support that actually speeds up work, improves data quality, and adapts to how tasks unfold in real-world conditions.
It starts with recognizing what AI can be trusted to do, because the work our customers do leaves no room for error: listening to an inspector’s comments, or recognizing that a photo of a power pole shows a broken insulator. AI’s strength in recognizing patterns makes it possible to ask questions in plain language and get useful answers, like workload summaries or suggestions for working more efficiently.
But AI “agents” can go far beyond responding when we tell them to do something. They can understand context, recognize how a process should work, and identify data that doesn’t look right. Ultimately, AI — the AI that lives at the heart of Fulcrum — will be able to guide field workers in all of their tasks, acting as a savvy scribe and a helpful, non-judgmental mentor.
With Agentic AI, the concept of a field app might disappear entirely. That’s how far the shift could go.
Here’s a video that follows a field worker as he uses smart glasses and Fulcrum’s Agentic AI capabilities to perform a power pole inspection. This is a vision of the future — but it’s not a future that’s far off.

The transition won’t happen in one jump. It will occur in increments, with each step delivering improvements to current workflows. Together, they create a path from manual processes to intelligent, responsive fieldwork systems. This roadmap starts with what Fulcrum already delivers and moves toward a future where intelligent agents actively support daily operations.
Audio FastFill
Status: Live
Fulcrum’s Audio FastFill allows field teams to complete forms by speaking naturally. It’s already in use across inspections, audits, and other forms of fieldwork. Spoken input is converted to structured data, reducing typing and helping teams capture more detail on the move.

What makes Audio FastFill powerful isn’t just its ease of use. It helps teams capture more data, more consistently. People naturally share more detail when they’re speaking than when they’re trying to thumb-type on a screen. That means richer records and better field intelligence with less mental friction.
Audio FastFill delivers a smoother way to get clean data without interrupting your workflow — and it’s the launchpad for what comes next.
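For the technically curious, here’s a rough sketch of the general pattern behind voice-to-form capture: transcribe the speech, ask a language model to map it onto known form fields, then validate before anything lands in the record. The form schema, field names, and stubbed model call below are illustrative assumptions, not Fulcrum’s actual implementation.
```python
# Hypothetical sketch: turning a spoken transcript into structured form values.
# The form schema, field names, and model call are illustrative placeholders.

import json

POLE_INSPECTION_FORM = {
    "asset_id": {"type": "string"},
    "condition": {"type": "string", "choices": ["Good", "Needs Repair", "Critical"]},
    "notes": {"type": "string"},
}

def build_extraction_prompt(transcript: str, schema: dict) -> str:
    """Ask a language model to map free-form speech onto known form fields."""
    return f"Extract values for these fields as JSON: {list(schema)}\nTranscript: {transcript}"

def call_language_model(prompt: str) -> str:
    """Placeholder for a real speech/LLM service; returns canned JSON here."""
    return json.dumps({
        "asset_id": "T-4471",
        "condition": "Needs Repair",
        "notes": "Cracked insulator on the upper crossarm.",
    })

def validate(values: dict, schema: dict) -> dict:
    """Keep only known fields and flag choice values the form doesn't allow."""
    clean = {}
    for field, value in values.items():
        spec = schema.get(field)
        if spec is None:
            continue  # ignore fields the form doesn't define
        if "choices" in spec and value not in spec["choices"]:
            clean[field] = {"value": value, "needs_review": True}
        else:
            clean[field] = value
    return clean

transcript = "Pole T-4471, cracked insulator on the upper crossarm, mark it needs repair."
raw = json.loads(call_language_model(build_extraction_prompt(transcript, POLE_INSPECTION_FORM)))
print(validate(raw, POLE_INSPECTION_FORM))
```
The validation step is the part that matters: anything that doesn’t fit the form gets flagged for the inspector rather than silently saved.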
Photo FastFill
Status: Coming soon
Photo FastFill builds on that voice-first foundation by adding real-time image capture into the workflow. Snap a photo, and Fulcrum’s AI reads the image, pulling asset IDs, labels, conditions, and other visual data to populate form fields instantly.
For example, during a pole inspection, you can take a photo of the utility tag and Fulcrum captures the asset ID instantly. That same image can also reveal the pole’s condition, such as cracks, rust, or damage, and autofill those fields too. Say, “Change the status to Needs Repair,” and Fulcrum updates the record on the spot. Photo FastFill blends voice input with photo intelligence to capture more context in less time, with fewer steps.

This feature is actively in development, shaped by real-world fieldwork. When it’s live, teams will get faster documentation, cleaner data, and fewer steps to get the job done.
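Here’s a simplified illustration of the idea: the vision model’s output is mapped onto form fields, and only high-confidence findings are autofilled, with the rest surfaced as suggestions. The detection results, thresholds, and field names are invented for the example; this isn’t Fulcrum’s implementation.
```python
# Hypothetical sketch of mapping image-analysis output onto form fields.
# The detections, confidence threshold, and field names are made up.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the vision model thinks it sees
    confidence: float  # 0.0 to 1.0

def analyze_photo(path: str) -> dict:
    """Placeholder for a real vision/OCR service call."""
    return {
        "ocr_text": ["ASSET 7741-B"],
        "detections": [Detection("cracked insulator", 0.91),
                       Detection("surface rust", 0.62)],
    }

def photo_to_fields(path: str, min_confidence: float = 0.8) -> dict:
    """Autofill only what the model is confident about; queue the rest for review."""
    result = analyze_photo(path)
    fields = {"asset_id": next(iter(result["ocr_text"]), None)}
    confident = [d.label for d in result["detections"] if d.confidence >= min_confidence]
    uncertain = [d.label for d in result["detections"] if d.confidence < min_confidence]
    if confident:
        fields["condition"] = "Needs Repair"
        fields["observed_issues"] = confident
    if uncertain:
        fields["review_suggestions"] = uncertain  # shown to the inspector, not auto-committed
    return fields

print(photo_to_fields("pole_7741.jpg"))
```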
Voice-based app control
Status: In exploration and design
This is where Fulcrum becomes fully operable by voice. Instead of tapping through menus or typing into fields, field agents will be able to navigate apps, start forms, and complete records using natural speech.
Example commands include, “Open the Vegetation Management app,” or “Create a new record and copy from the last one.” Teams will also be able to use voice to add repeatable entries and media. Saying, “Add three attachments, a transformer and two lights,” or “Take a photo and add it to the pole field with the caption ‘Pole looks good,’” will trigger those actions directly.
Captured data is confirmed verbally so users can stay focused without needing to check each field manually. This advancement will serve to improve speed, reduce physical interaction with devices, and support hands-free fieldwork.
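To make the idea concrete, here’s a toy command router in Python. A real system would use speech and intent models rather than hand-written rules, and the command phrasing and action names here are made up, but the shape is the same: an utterance comes in, a specific app action comes out, and anything ambiguous triggers a verbal confirmation.
```python
# Hypothetical sketch of routing spoken commands to app actions.
# Command phrasing and action names are illustrative only.

import re

def handle_command(utterance: str) -> str:
    text = utterance.lower().strip()

    if match := re.match(r"open the (.+) app", text):
        return f"ACTION: open_app('{match.group(1)}')"
    if "create a new record" in text and "copy from the last one" in text:
        return "ACTION: create_record(copy_from='latest')"
    if match := re.match(r"add (\d+|one|two|three) attachments?", text):
        return f"ACTION: add_attachments(count='{match.group(1)}')"
    if text.startswith("take a photo"):
        return "ACTION: capture_photo()"
    return "CONFIRM: Sorry, I didn't catch that. Could you repeat it?"

for cmd in ["Open the Vegetation Management app",
            "Create a new record and copy from the last one",
            "Add three attachments, a transformer and two lights"]:
    print(handle_command(cmd))
```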
Contextual assistance
Status: In process
Here, Fulcrum is developing a contextually aware assistant that improves speed, clarity, and data quality during fieldwork. Prompts are driven by the field team’s location, the form in use, voice input, photos, and past records, giving the system a broader understanding of the task at hand.
The assistant can identify nearby assets and suggest appropriate actions. For example: “Looks like you’re near transformer T77. Want to start an inspection?” It can also flag values that seem out of place, prompt users when a required step is skipped, or suggest follow-ups based on task flow.

It also helps clarify ambiguous voice input, offering quick corrections like, “Did you mean 75kVA?” That keeps data clean without interrupting momentum.
Sequencing support also plays a role. After a Job Hazard Analysis is completed, for example, the assistant may prompt the user to begin the next step based on typical fieldwork sequences.
Underlying this capability are agent frameworks and multimodal models being designed to handle complex, field-driven scenarios. The focus remains on practical support, with prompts that are helpful, timely, and grounded in the job.
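As a rough illustration of how that context could drive prompts, here’s a small sketch: location triggers a nearby-asset suggestion, an implausible value triggers a correction, and a completed step triggers the next one. The asset coordinates, kVA ratings, and field names are assumptions for the example only, not Fulcrum’s data model.
```python
# Hypothetical sketch of context-driven prompting: nearby assets, out-of-range
# values, and completed steps each trigger a suggestion. All data is invented.

from math import hypot

KNOWN_ASSETS = {"T77": (447120.0, 4629834.0), "T81": (447410.0, 4629910.0)}  # projected x, y in meters

def nearby_asset(x: float, y: float, radius_m: float = 30.0):
    """Return the closest known asset within the radius, if any."""
    dist, asset_id = min((hypot(ax - x, ay - y), aid) for aid, (ax, ay) in KNOWN_ASSETS.items())
    return asset_id if dist <= radius_m else None

def suggest(context: dict) -> list[str]:
    prompts = []
    if asset := nearby_asset(context["x"], context["y"]):
        prompts.append(f"Looks like you're near transformer {asset}. Want to start an inspection?")
    kva = context.get("transformer_kva")
    if kva is not None and kva not in (25, 50, 75, 100):  # plausible ratings only
        prompts.append(f"Did you mean 75kVA? {kva} isn't a rating we usually see here.")
    if context.get("jha_complete") and not context.get("inspection_started"):
        prompts.append("JHA is done. Ready to start the pole inspection?")
    return prompts

print(suggest({"x": 447125.0, "y": 4629840.0, "transformer_kva": 7, "jha_complete": True}))
```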
AR workflows
Status: In research and vendor evaluation
Here, Fulcrum introduces a fully immersive experience that combines AR hardware, voice input, and visual AI. Workers use AR glasses to capture photos and videos, view forms and asset data in a heads-up display, and interact with Fulcrum through speech instead of taps or swipes.
Forms appear in the user’s field of view alongside current status and context. Photos and video captured through the glasses are added directly to the record in progress. Visual recognition enhances this process by identifying field conditions and triggering real-time prompts.

For example, the system spots a structural issue on a utility pole and suggests taking a photo. That image is then used to autofill relevant inspection fields. As the Fulcrum agentic assistant continues to evolve, visual input becomes another source of context to guide next steps and prevent errors.
These tools will give field agents safer, faster ways to complete tasks while increasing the quality and consistency of the data they collect.
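Here’s one way that loop might look, sketched at a very high level: a detection event from the glasses becomes a suggestion, and an accepted suggestion becomes a record update. The event shape, labels, and field names are hypothetical.
```python
# Hypothetical sketch of reacting to visual-recognition events from AR glasses:
# a detection becomes a prompt, and an accepted prompt becomes a record update.

def on_detection(event: dict) -> str | None:
    """Turn a high-confidence detection into a suggestion for the wearer."""
    if event["label"] == "cracked crossarm" and event["confidence"] >= 0.85:
        return "Possible cracked crossarm on this pole. Capture a photo for the record?"
    return None

def on_accept(photo_ref: str, record: dict) -> dict:
    """Apply the accepted suggestion: attach the photo and pre-fill the related field."""
    record.setdefault("photos", []).append(photo_ref)
    record["damage_type"] = "Cracked crossarm"
    return record

record = {"asset_id": "P-1031"}
prompt = on_detection({"label": "cracked crossarm", "confidence": 0.9})
if prompt:
    print(prompt)
    print(on_accept("glasses_frame_0042.jpg", record))
```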
AI features are already making an impact in Fulcrum. Audio FastFill is active in the field today, helping teams finish records faster, capture better detail, and reduce rework. Teams use AI in fieldwork tasks like inspections, audits, and high-volume workflows — situations where typing slows everything down.
Photo FastFill is moving through development now, with early tests focused on extracting structured data from real-world images. The goal is faster documentation with less effort, and fewer of the mistakes that come from copying things down manually.
Voice-based app control is in build, giving teams the ability to launch apps, create records, and complete forms without touching a screen. We’re shaping contextual assistance around actual field patterns, surfacing the right prompt at the right time based on form type, past entries, and task flow.
AR workflows and visual recognition are being designed with partner input. These tools will deliver heads-up access to forms, guidance, and media capture to make work safer, more efficient, and more consistent. And finally, agentic AI capabilities are being shaped into smart agents that help with sequencing, validation, and exception handling, based on how fieldwork gets done.
All of this reflects Fulcrum’s vision to bring AI into the field in ways that are practical, intuitive, and genuinely useful.
Every workflow you define, every record your team completes, every photo or voice entry you capture — all of it builds the foundation for smarter fieldwork. The structure you put in place today becomes the framework Fulcrum’s agentic AI will use to support real-time decisions, catch critical gaps, and guide work forward.
You don’t need to wait for a future release. You can start modernizing field execution now, with tools that already capture structured data and connect directly to Fulcrum’s evolving AI capabilities.
When the agent arrives, it won’t start from zero. It’ll run on what you’ve already built.
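A small example of what that foundation buys you: once a form defines which fields matter, even a very simple agent can spot gaps before a record is signed off. The field names below are illustrative, not Fulcrum’s schema.
```python
# Hypothetical sketch of why structure matters: with a defined form, an agent can
# check completeness and flag gaps automatically.

REQUIRED = ["asset_id", "condition", "photos"]

def find_gaps(record: dict) -> list[str]:
    """List required fields that are missing or empty in a completed record."""
    return [f for f in REQUIRED if not record.get(f)]

record = {"asset_id": "P-1031", "condition": "Needs Repair", "photos": []}
print(find_gaps(record))  # -> ['photos']: the agent would prompt for a photo before sign-off
```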
We’ll show you what’s live today, what’s in motion, and how Fulcrum boosts your team’s performance. Your team moves faster, works safer, and stays ahead without changing the way they already get things done. Drop us a line to get started.
Frequently asked questions
What AI capabilities are available in Fulcrum right now?
Audio FastFill is live and being used by field teams to complete records using voice input. It reduces typing and improves form completeness without changing existing workflows.
Will Fulcrum eventually support fully hands-free workflows?
That’s the direction Fulcrum is building toward. Voice-based navigation and form control are in development, aimed at reducing the need for manual interaction in the field.
Is AR part of Fulcrum’s product today?
Not yet. AR-based visual workflows are being designed and evaluated, including integration with smart glasses and voice input.
Will adopting Fulcrum require teams to change how they work?
No. Fulcrum’s roadmap is designed to enhance existing workflows. Each capability is added in layers, with teams continuing to work the way they do now while gaining more support over time.
How is Fulcrum’s approach to AI different from other platforms?
Fulcrum isn’t launching AI as a single release. Instead, capabilities are being delivered as they are developed, with each innovation tied directly to field conditions and focused on supporting real work, not generic automation.
What is Photo FastFill, and is it available yet?
Photo FastFill is in active development. It will allow Fulcrum to extract structured data from images, like labels or asset tags, and autofill relevant form fields.
How will Fulcrum’s contextual assistant work?
Fulcrum is developing a system that surfaces relevant prompts based on task flow, location, form type, and previous inputs. This assistant is still in development and not yet available.
What does Agentic AI mean in the Fulcrum roadmap?
Agentic AI refers to smart, task-aware systems that guide decisions and flag issues during fieldwork. Fulcrum is architecting this layer now, with agent capabilities planned for the future.
Why is structured data so central to Fulcrum’s AI roadmap?
Structured data enables every future AI feature. Forms completed today, especially those using voice and photos, create the logic and patterns that agent-based systems will rely on later.
Why should I adopt Fulcrum now if some AI features are still in development?
Because the work your team does now builds the foundation for what’s next. Data collected today will directly support upcoming features, making it easier to unlock future value without disruption.