What We Look for Before Taking on an Automation Project

  • Dec 19, 2025
  • 6 min read

Sometimes we turn down projects at ARG. Not because we don't need the work, but because taking on the wrong project hurts everyone. The client spends money on something that won't deliver. We spend time on something that won't succeed. And failed projects make organizations skeptical of AI automation for years afterward.


Over time, we've developed a clear sense of which projects are likely to succeed and which aren't. Before we take on an engagement, we look for specific signals. When those signals are present, projects tend to deliver real value. When they're absent, even great engineering can't save the outcome.


This post describes what we look for and why it matters.


A Workflow That Already Exists


We automate existing workflows. We don't design new processes from scratch and then automate them.


This distinction matters because automation amplifies whatever it touches. If a workflow is well-understood, automation makes it faster and more consistent. If a workflow is poorly defined, automation makes the chaos more efficient.


Before taking on a project, we ask: can you walk us through exactly how this workflow works today? Who does what, in what order, using what tools? What are the inputs? What are the outputs? What happens when something goes wrong?


If the client can answer these questions clearly, the workflow is ready for automation. If the answers are vague or inconsistent ("it depends on who's handling it"), the first step is process documentation, not automation. Sometimes we help with that. Sometimes we recommend the client do it first and come back when the workflow is stable.


Sufficient Volume to Justify the Investment


Automation has a cost. Engineering time, integration work, testing, deployment, ongoing maintenance. That cost needs to be justified by the value delivered.


We look for workflows with enough volume that automation creates meaningful leverage. A task that happens five times per week is rarely worth automating. A task that happens fifty times per day almost always is.


Volume also affects how quickly we can iterate. High-volume workflows generate data fast. We can see what's working, identify edge cases, and improve the system within days or weeks. Low-volume workflows take months to generate the same learning.


There's no hard threshold, but as a rough guide: if automation won't save at least 20 hours per week of human time, the ROI is likely to be marginal. That doesn't mean we won't consider it, but we'll be honest about the economics.
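The rough guide above can be turned into a simple payback calculation. A minimal sketch, where all the dollar figures are illustrative assumptions rather than real project numbers:

```python
# Illustrative payback sketch for the rough guide above.
# All figures are hypothetical assumptions, not actual project economics.

def payback_weeks(hours_saved_per_week: float,
                  loaded_hourly_cost: float,
                  build_cost: float,
                  weekly_run_cost: float) -> float:
    """Weeks until cumulative net savings cover the one-time build cost."""
    net_weekly_savings = hours_saved_per_week * loaded_hourly_cost - weekly_run_cost
    if net_weekly_savings <= 0:
        raise ValueError("automation never pays back at these numbers")
    return build_cost / net_weekly_savings

# 20 hours/week saved at a $60/hour loaded cost, a $40k build,
# and $200/week in ongoing operation:
weeks = payback_weeks(20, 60.0, 40_000, 200.0)
print(f"payback in ~{weeks:.0f} weeks")  # → payback in ~40 weeks
```

At five occurrences a week instead of fifty a day, the same arithmetic stretches the payback far enough that the ROI becomes marginal, which is the point of the threshold.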


Clear Inputs and Outputs


AI agents work best when they know what they're receiving and what they're supposed to produce.


Clear inputs mean the data arrives in a predictable format through a predictable channel. A form submission, an email to a specific inbox, a file uploaded to a specific folder, a record created in a specific system. When inputs are scattered across channels, formats, and systems, the integration work dominates the project and the automation itself becomes an afterthought.


Clear outputs mean there's a defined deliverable. A classification, a summary, a routing decision, a generated document, a record update. When the expected output is vague ("help with this process"), we push back until it's specific.
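One way to force this specificity is to write the inputs and outputs down as types before any automation is built. A minimal sketch for a hypothetical ticket-routing workflow; the field names are illustrative examples, not a real client schema:

```python
# A sketch of pinning down inputs and outputs before building anything.
# TicketInput and RoutingOutput are hypothetical, not a real client schema.
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class TicketInput:
    """What the agent receives: one ticket from a known, enumerated channel."""
    ticket_id: str
    channel: Literal["email", "web_form"]  # predictable, named sources only
    subject: str
    body: str

@dataclass(frozen=True)
class RoutingOutput:
    """What the agent must produce: a concrete, checkable deliverable."""
    ticket_id: str
    queue: Literal["billing", "technical", "sales"]
    confidence: float  # 0.0-1.0; low values can be escalated to a human

# If a field can't be filled from the source system, the input isn't clear yet.
example = TicketInput("T-1042", "email", "Invoice question", "Was I charged twice?")
```

If a client can't fill in a sketch like this, "help with this process" hasn't yet become a specification.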


We also look at whether inputs and outputs are accessible. Can we read from the source system via API? Can we write to the destination system? If the answer involves screen scraping, manual exports, or emailing files around, the project becomes fragile. Sometimes these constraints are unavoidable, but we want to understand them upfront.


A Human Who Owns the Workflow


Every successful automation project has a human sponsor who owns the workflow being automated. This person understands how the work gets done today, cares about improving it, and has the authority to make decisions.


When we ask questions about edge cases, this person can answer. When we need test data, this person can provide it. When we propose changes to how the workflow operates, this person can approve them.


Projects without a clear owner stall. Questions go unanswered. Decisions get escalated to committees. Requirements shift because different stakeholders have different opinions. We've learned to confirm that an owner exists and is committed before starting work.

We also look for someone who will use the system after we leave. Automation isn't a one-time installation. It requires monitoring, occasional adjustment, and feedback when things go wrong. If nobody on the client side is prepared to own the system operationally, the project will decay after handoff.


Tolerance for Iteration

AI automation projects are not waterfall implementations. We don't write a specification, disappear for three months, and emerge with a finished product. We build incrementally, test against real data, and adjust based on what we learn.


This requires a client who is comfortable with iteration. Early versions will have gaps. Edge cases will surface that nobody anticipated. The system will make mistakes that reveal where refinement is needed. Clients who expect perfection on day one are consistently disappointed.


We look for signals that the organization can work iteratively. Have they run agile projects before? Are they comfortable with a pilot phase before full deployment? Do they understand that the first version is a starting point?


We also discuss error tolerance explicitly. What happens if the system makes a mistake? How bad is it? Can mistakes be caught and corrected, or are they irreversible? Organizations with zero tolerance for error are difficult to serve. Automation requires accepting that some errors will occur, especially early on, in exchange for the benefits of scale and consistency.


Access to Data and Systems


This one seems obvious, but it's surprising how often it becomes a blocker.

Before we commit to a project, we verify that we can get access to the systems and data we need. This means API access or database access to source systems, credentials and permissions for any tools the workflow touches, sample data that represents real production cases, and a test environment where we can develop without affecting production.


Access requests often take longer than expected. IT departments have security reviews. Vendors have approval processes. Legal has data handling concerns. We try to start these conversations as early as possible, ideally before the engagement formally begins.


When access is genuinely blocked, sometimes for good reasons, we discuss alternatives. Can we work with anonymized data? Can the client extract data and provide it to us? Can we build against a mock system and integrate later? These workarounds are possible, but they add risk and time. We want to understand the access situation clearly before scoping the work.
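The mock-system workaround is workable when the workflow code depends only on an interface rather than on the real system. A minimal sketch of that pattern; `RecordStore` and its methods are hypothetical, not a specific vendor API:

```python
# A sketch of "build against a mock, integrate later".
# RecordStore and its methods are hypothetical, not a specific vendor API.
from typing import Protocol

class RecordStore(Protocol):
    def fetch(self, record_id: str) -> dict: ...
    def update(self, record_id: str, fields: dict) -> None: ...

class MockRecordStore:
    """In-memory stand-in used until access to the real system is granted."""
    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def fetch(self, record_id: str) -> dict:
        return dict(self._records.get(record_id, {}))

    def update(self, record_id: str, fields: dict) -> None:
        self._records.setdefault(record_id, {}).update(fields)

def mark_processed(store: RecordStore, record_id: str) -> None:
    """Workflow logic depends only on the interface, not the backing system."""
    record = store.fetch(record_id)
    store.update(record_id, {**record, "status": "processed"})

store = MockRecordStore()
mark_processed(store, "R-7")
# Swapping in a real API-backed RecordStore later leaves mark_processed unchanged.
```

The added risk mentioned above lives in the gap between the mock and the real system: authentication, rate limits, and data quirks only surface at integration time.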


Executive Support


Automation projects change how work gets done. They affect people's jobs, sometimes eliminating tasks that employees have done for years. They require cooperation across departments. They need budget, not just for the initial build, but for ongoing operation.

Projects with executive support can navigate these challenges. Projects without executive support get stuck. An IT department that sees the project as a threat can delay access indefinitely. A middle manager who fears the automation will reduce their headcount can quietly undermine adoption. A finance team that didn't budget for ongoing costs can pull the plug after launch.


We look for evidence that leadership is committed. Has the project been discussed at the executive level? Is there a clear sponsor with budget authority? Does leadership understand that this is a change management initiative, not just a technology purchase?


We've learned that technical success doesn't guarantee organizational success. A beautifully engineered automation that nobody uses because of internal politics is still a failed project.


Realistic Expectations


Finally, we look for clients who have realistic expectations about what AI automation can and cannot do.


AI is powerful but not magic. It works well on tasks that are repetitive, high-volume, and rules-based, even if the rules are complex. It struggles with tasks that require deep expertise, nuanced judgment, or information the system doesn't have access to.

We're direct about limitations during initial conversations. If a client expects 100% accuracy, we explain why that's not realistic and what accuracy level they can expect. If they expect the system to handle every possible edge case, we explain why that's not practical and how we design for graceful escalation instead.
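Graceful escalation usually reduces to a confidence gate: confident outputs are applied automatically, and everything else goes to a person. A minimal sketch; the 0.85 threshold is an illustrative assumption tuned per workflow, not a universal constant:

```python
# A sketch of graceful escalation: rather than promising 100% accuracy,
# low-confidence outputs are routed to a human review queue.
# The 0.85 threshold is an illustrative assumption, tuned per workflow.

CONFIDENCE_THRESHOLD = 0.85

def dispatch(classification: str, confidence: float) -> str:
    """Auto-apply confident results; escalate the rest for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{classification}"
    return f"review:{classification}"  # lands in a human review queue

print(dispatch("billing", 0.93))  # confident enough to apply automatically
print(dispatch("billing", 0.61))  # escalated rather than silently wrong
```

Raising the threshold trades throughput for accuracy; where to set it is exactly the error-tolerance conversation described above.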


Clients with realistic expectations become partners. They understand that the first version will improve over time. They provide feedback when the system makes mistakes instead of declaring the project a failure. They celebrate meaningful improvements instead of fixating on the gap between reality and perfection.


Conclusion

Taking on the right projects is as important as executing them well. We look for workflows that exist and are well-understood, sufficient volume to justify the investment, clear inputs and outputs, a human owner who is committed, tolerance for iteration, access to data and systems, executive support, and realistic expectations.


When these elements are in place, projects succeed more often than they fail. When they're missing, even excellent engineering struggles to deliver value.


Being selective about which projects we take on allows us to do our best work on the projects we accept. That's better for our clients and better for us.

 
 

© 2025 Algorithmic Research Group, Inc. 
