
A pre-implementation framework for evaluating analytics use-case viability

Author: Alekseeva I.

April 18, 2026

Citation

Alekseeva I. A pre-implementation framework for evaluating analytics use-case viability // Актуальные исследования. 2026. №16 (302). URL: https://apni.ru/article/14890-a-pre-implementation-framework-for-evaluating-analytics-use-case-viability

Abstract

Organizations invest substantial resources in analytics and machine learning projects that are never used or produce no measurable business impact. A root cause consistently identified in post-mortem analyses is that projects are initiated without structured evaluation of whether the use case justifies development in the first place. This paper addresses that gap by proposing the PIPA framework – a four-dimension pre-implementation assessment covering Problem Validity, Impact Potential, Adoption Readiness, and Asymmetry of Alternatives. The framework is operationalized as a scored instrument of eight questions producing a 0-16 point scale with defined decision thresholds. Application of the framework before any development commitment is expected to reduce the proportion of initiated projects that fail to reach productive use. The contribution is both theoretical – providing a structured pre-implementation construct – and practical, offering a concrete diagnostic tool for AI project portfolio governance.

Article text

1. Introduction

The corporate analytics and machine learning market continues its rapid expansion, yet a substantial share of initiatives still fails to reach productive deployment or to influence business decisions after deployment. Prior research has documented persistent challenges in deployment, adoption, and operationalization, suggesting that many failures are structurally predictable rather than purely technical.

The dominant response to analytics project failure in the practitioner literature focuses on post-initiation factors: improving data quality, strengthening MLOps practices, increasing user adoption through change management, or improving model interpretability [4, p. 317-342; 7, p. 425-478]. These are legitimate interventions, but they share a critical limitation: they are applied after the decision to build has already been made. If the fundamental premise of the use case is flawed – if the problem does not recur at sufficient scale, if no process can absorb the model output, if users will not change their behavior regardless of model quality – then no amount of technical or organizational excellence in execution will produce business value.

This paper argues that the most impactful intervention point in the analytics project lifecycle is the earliest one: the decision of whether to initiate a project at all. Despite the economic significance of this decision, to the author's knowledge the literature lacks a widely used structured pre-implementation scoring instrument at the individual analytics use-case level. Existing research addresses deployment challenges [5, p. 1-29; 6, p. 2503-2511], technology acceptance after deployment [3, p. 319-340; 7, p. 425-478], and strategic alignment or project selection at the portfolio level [1; 2, p. 361-380], but none provides a scored pre-build diagnostic instrument applicable at the individual use case level.

The goal of this research is to fill that gap by proposing the PIPA framework – a four-dimension pre-implementation assessment instrument for analytics and machine learning use cases. The scientific novelty of the paper lies in the operationalization of pre-build viability assessment as a structured, scored instrument with defined decision thresholds, representing a contribution not previously made in the analytics project management literature.

2. Literature review

The literature relevant to pre-implementation use-case evaluation spans four domains, none of which individually provides what this paper proposes.

Project selection and portfolio management research [1; 2, p. 361-380] addresses the question of which projects to fund at the portfolio level, using criteria such as strategic alignment, expected ROI, and resource availability. This literature operates at the portfolio level and assumes that individual use cases have already been scoped and estimated. It does not provide a pre-scope diagnostic for individual use case viability, and it does not incorporate analytics-specific failure modes.

Technology acceptance research, represented primarily by TAM [3, p. 319-340] and UTAUT [7, p. 425-478], provides well-validated theory of the conditions under which users adopt information systems. These frameworks are post-design and post-deployment in orientation: they explain adoption outcomes as a function of system characteristics after the system has been designed and built. They do not provide pre-build assessment of whether those adoption conditions are likely to be met for a proposed use case.

AI and analytics project failure literature [1; 5, p. 1-29; 6, p. 2503-2511] has documented the most common causes of failure with increasing precision. Sculley et al. [6, p. 2503-2511] identified technical debt specific to ML systems. Paleyes et al. [5, p. 1-29] provided a systematic survey of deployment challenges. Brynjolfsson and McAfee [1] identified strategic misalignment as a root cause. These works collectively establish that most failures are predictable and attributable to identifiable factors – but they do not translate this insight into a pre-build instrument.

The gap is therefore specific: a structured, scored, pre-implementation instrument for evaluating analytics use-case viability at the individual project level, incorporating analytics-specific dimensions and producing actionable decision thresholds. The PIPA framework proposed in this paper fills this gap.

3. Theory

The PIPA framework is grounded in three theoretical constructs that jointly define the conditions for analytics project success.

The first is the decision-value chain, which holds that analytics creates business value through its effect on decisions and actions [8, p. 1163-1171]. A model that produces accurate predictions but does not change any decision produces zero business value, regardless of its technical quality. This construct implies that use case evaluation must begin with the decision, not the data or the model: if there is no decision that will change, there is no value to be created.

The second construct is adoption as a prerequisite for value realization, drawn from the technology acceptance literature [3, p. 319-340; 7, p. 425-478]. Even when a use case has genuine decision value, that value is only realized if the intended users adopt the system. Adoption is not a post-deployment problem to be solved; it is a pre-build condition to be assessed. If the conditions for adoption are not present before development begins – compatible workflow, user trust in algorithmic recommendations, sufficient understanding of the problem – they are unlikely to materialize after deployment.

The third construct is the principle of solution parsimony, which holds that the simplest solution that achieves the required decision quality improvement is preferable to a more complex one. This construct implies that before committing to an ML solution, the evaluator must explicitly assess whether simpler alternatives – business rules, threshold alerts, process changes, or human expert consultation – can solve the problem sufficiently. Choosing ML by default when a simpler solution would suffice is a form of over-engineering that increases cost, timeline, and failure risk without increasing value.

These three constructs jointly define the four dimensions of the PIPA framework: Problem Validity (the decision exists and is real), Impact Potential (the decision is important enough and the process can absorb the output), Adoption Readiness (users will actually use the system), and Asymmetry of Alternatives (ML is the right solution type for this problem).

4. Results

4.1. Framework overview

The PIPA framework evaluates a proposed analytics or ML use case across four dimensions prior to any development commitment. Each dimension is grounded in a core question that must be answered affirmatively for the use case to be viable. Table 1 presents the framework dimensions, their core questions, key sub-questions, and the primary failure signal associated with each.

Table 1

PIPA Framework: dimensions, questions, and failure signals

Dimension | Core Question | Key Sub-questions | Failure Signal
Problem Validity | Is this a real, specific, recurring decision problem? | Who decides? How often? What currently prevents a good decision? | Problem defined by data team, not by decision-maker
Impact Potential | Does solving this problem create measurable business value? | What changes if the model is right? Can the process absorb the output? | Value exists only in theory; no process changes planned
Adoption Readiness | Will the intended users actually use the system? | Do users trust algorithmic recommendations? Is the workflow compatible? | No user involvement in design; end-users not identified
Asymmetry of Alternatives | Is building an ML system the right solution for this problem? | Can a simpler rule, alert, or process change solve 80% of the problem? | ML chosen by default without comparing alternatives

4.2. Dimension 1: Problem Validity

Problem Validity assesses whether the proposed use case is grounded in a real, specific, recurring decision problem experienced by an identifiable decision-maker. The most common source of analytics project failure at this dimension is that the use case is defined by the data or analytics team based on what is technically interesting or tractable, rather than by the decision-maker based on an experienced decision gap [8, p. 1163-1171].

A valid problem has three properties. It is specific: it can be articulated as a decision that a named individual or role makes at a defined frequency. It is recurrent: it occurs frequently enough that the cumulative benefit of improving decision quality justifies development and maintenance costs. It is currently suboptimal: there is evidence – not just assumption – that current decision quality is below what better information would produce.

The key diagnostic activity for this dimension is structured interviewing of the decision-maker, not the technical sponsor or analytics champion. If no decision-maker can be identified who experiences the problem, Problem Validity has not been established.

4.3. Dimension 2: Impact Potential

Impact Potential assesses whether solving the problem creates sufficient and measurable business value, and whether the organizational process is capable of absorbing and acting on the model output. The first component – value magnitude – requires that a quantified business outcome be definable: cost reduction, revenue increase, risk reduction, or quality improvement, with a baseline and a target.

The second component – process absorptivity – is frequently underestimated and can prevent technically sound models from creating business value in practice [5, p. 1-29; 8, p. 1163-1171]. A model that produces accurate recommendations cannot generate business impact if the surrounding process does not change to incorporate those recommendations. Process absorptivity must be assessed explicitly: who will see the output, in what system, at what point in their workflow, and what action are they expected to take?

If the honest answer is 'the output will be available in a dashboard that users can consult if they choose,' Impact Potential is low regardless of model accuracy. High Impact Potential requires that the model output be embedded in a workflow where acting on it is the path of least resistance.

4.4. Dimension 3: Adoption Readiness

Adoption Readiness assesses the pre-build conditions for user acceptance of the system. Drawing on TAM [3, p. 319-340] and UTAUT [7, p. 425-478], the two primary predictors of adoption are perceived usefulness and perceived ease of use. Both can be assessed before building through structured user research.

Perceived usefulness requires that the intended users experience the decision gap that the system is designed to address. If users do not recognize the problem, or believe they already solve it adequately, perceived usefulness will be low regardless of actual model performance. Perceived ease of use requires that the system output be interpretable, accessible within the user's existing workflow, and trustworthy – particularly in domains where algorithmic recommendations conflict with expert intuition.

A critical adoption readiness indicator is whether end-users have been interviewed about the proposed solution before development begins. In practice, analytics projects are frequently designed based on interviews with technical sponsors and business directors, not with the operational staff who will use the system daily. These are different people with different goals, different mental models of the problem, and different tolerance for algorithmic error [4, p. 317-342].

4.5. Dimension 4: Asymmetry of Alternatives

Asymmetry of Alternatives assesses whether a machine learning or advanced analytics solution is the appropriate solution type for the problem, relative to simpler alternatives. This dimension operationalizes the principle of solution parsimony and addresses a systematic bias in analytics project initiation: the tendency to choose ML solutions by default because the project is being evaluated by ML practitioners or because 'AI' solutions carry higher internal prestige.

The evaluator must explicitly ask: can a deterministic business rule, a threshold-based alert, a visualization, or a process redesign solve 80 percent of this problem at a fraction of the development and maintenance cost? If yes, the ML solution may still be preferable at the margin – but that preference must be justified, not assumed. Many use cases that are initiated as ML projects are better solved by a simple rule: if X > threshold, trigger Y action. The marginal improvement from an ML model over a well-designed rule is often smaller than anticipated, and the maintenance burden is substantially higher.
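As a concrete illustration, the sketch below encodes the kind of 'if X > threshold, trigger Y action' rule that the evaluator should treat as the baseline before an ML solution is justified. The metric, threshold value, and action are hypothetical placeholders, not values taken from the paper.

```python
# Hypothetical business-rule baseline for the Asymmetry of Alternatives check.
# The metric name and threshold are illustrative assumptions.

INACTIVITY_THRESHOLD_DAYS = 30  # assumed threshold for illustration


def flag_for_outreach(days_since_last_login: int) -> bool:
    """Deterministic rule: trigger the retention action when X > threshold."""
    return days_since_last_login > INACTIVITY_THRESHOLD_DAYS


# Any proposed ML model should be justified against this rule's accuracy and
# its near-zero maintenance cost, rather than assumed to be superior.
print(flag_for_outreach(45))  # True -> trigger outreach
```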

4.6. Scoring instrument

The PIPA framework is operationalized as a scored instrument of eight assessment questions, two per dimension, each scored 0, 1, or 2 based on defined criteria. The maximum score is 16. Table 2 presents the complete scoring instrument.

Table 2

PIPA scoring instrument

Assessment Question | 0 (Not met) | 1 (Partially met) | 2 (Fully met)
A named decision-maker has been identified and interviewed | No owner identified | Owner identified, not interviewed | Interviewed; confirmed the problem
The decision recurs at least monthly at meaningful scale | Rare or one-off | Quarterly or low scale | Monthly or more frequent
Current decision quality is demonstrably suboptimal | No evidence | Anecdotal only | Data-backed evidence
A quantified business outcome is defined for success | No metric defined | Metric defined, no baseline | Metric + baseline + target defined
The process can absorb model output without redesign | Major redesign required | Minor adaptation needed | Output fits directly into workflow
End-users have been interviewed about the proposed solution | Not interviewed | Interviewed but not about this solution | Interviewed and confirmed utility
Users have prior experience with data-driven tools | No experience | Some exposure | Regular users of analytics
A simpler non-ML solution has been explicitly evaluated | Not considered | Considered informally | Formally evaluated and ruled out

The total PIPA score is the sum of scores across all eight questions, ranging from 0 to 16. Table 3 presents the decision thresholds and associated recommendations.

Table 3

PIPA score thresholds and decision recommendations

PIPA Score | Classification | Recommendation | Primary Risk
13-16 | Strong candidate | Proceed to development planning | Low, standard project risks apply
9-12 | Conditional candidate | Address specific gaps before committing budget | Medium, targeted mitigation required
5-8 | Weak candidate | Redesign problem statement or solution approach | High, fundamental issues present
0-4 | Do not build | Stop; redirect resources to problem discovery | Critical, project likely to fail

The thresholds in Table 3 are calibrated to prioritize the avoidance of Type I errors – initiating projects that should not be built – over Type II errors, declining projects that could have succeeded. This reflects the asymmetric cost structure of analytics project failure: the cost of a failed project includes not only direct development spend but opportunity cost, organizational trust damage, and the reduced willingness to invest in future initiatives. The cost of a declined project is the lost value of a use case that would have succeeded, which can be recovered in a future cycle with better preparation.
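To make the scoring and threshold logic concrete, the following minimal sketch sums the eight question scores from Table 2 and applies the thresholds from Table 3. The function and variable names are illustrative and not part of the published framework; only the total score determines the classification, so the grouping of questions into dimensions is omitted here.

```python
# A minimal sketch of the PIPA scoring and threshold logic from Tables 2 and 3.
# Question wording follows Table 2; the names below are illustrative assumptions.

from typing import Dict, Tuple

PIPA_QUESTIONS = [
    "A named decision-maker has been identified and interviewed",
    "The decision recurs at least monthly at meaningful scale",
    "Current decision quality is demonstrably suboptimal",
    "A quantified business outcome is defined for success",
    "The process can absorb model output without redesign",
    "End-users have been interviewed about the proposed solution",
    "Users have prior experience with data-driven tools",
    "A simpler non-ML solution has been explicitly evaluated",
]

# Decision thresholds from Table 3: (minimum total score, classification, recommendation).
THRESHOLDS = [
    (13, "Strong candidate", "Proceed to development planning"),
    (9, "Conditional candidate", "Address specific gaps before committing budget"),
    (5, "Weak candidate", "Redesign problem statement or solution approach"),
    (0, "Do not build", "Stop; redirect resources to problem discovery"),
]


def pipa_score(answers: Dict[str, int]) -> Tuple[int, str, str]:
    """Sum the eight question scores (each 0, 1, or 2) and classify the total."""
    if set(answers) != set(PIPA_QUESTIONS):
        raise ValueError("Answers must cover exactly the eight PIPA questions")
    if any(score not in (0, 1, 2) for score in answers.values()):
        raise ValueError("Each question must be scored 0, 1, or 2")
    total = sum(answers.values())
    for minimum, classification, recommendation in THRESHOLDS:
        if total >= minimum:
            return total, classification, recommendation
    raise AssertionError("unreachable: the last threshold starts at 0")


# Example: a use case with a confirmed decision problem but weak adoption evidence.
example = dict(zip(PIPA_QUESTIONS, [2, 2, 1, 2, 1, 0, 1, 1]))
print(pipa_score(example))  # (10, 'Conditional candidate', ...)
```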

4.7. Application protocol

The PIPA assessment is designed to be completed before any development resource is committed – ideally at the point when a use case is being considered for inclusion in a project roadmap. The recommended application protocol comprises four steps. First, identify and interview the primary decision-maker – the person who makes the decision the system is intended to support. Second, map the current decision process, including what information is used, where gaps exist, and what action follows a decision. Third, evaluate simpler alternatives explicitly, documenting why they are insufficient if ML is to be justified. Fourth, score the eight questions jointly between the analytics lead and a business stakeholder, not by the analytics team alone.

The PIPA score should be reviewed at the project kick-off stage gate, with a record of which questions scored 0 or 1 and what mitigation actions are planned. This record serves as a baseline for early-warning monitoring throughout the project lifecycle.
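A lightweight record of this kind could be kept as structured data. The sketch below is one possible shape, with a hypothetical use case and mitigation text, showing how questions scored 0 or 1 can be flagged until a mitigation action is planned.

```python
# A possible shape for the kick-off stage-gate record described above.
# The use case name and mitigation text are hypothetical illustrations.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class StageGateRecord:
    use_case: str
    scores: Dict[str, int]  # PIPA question -> score of 0, 1, or 2
    mitigations: Dict[str, str] = field(default_factory=dict)  # question -> planned action

    def open_gaps(self) -> List[str]:
        """Questions scored 0 or 1 that do not yet have a planned mitigation."""
        return [q for q, s in self.scores.items() if s < 2 and q not in self.mitigations]


record = StageGateRecord(
    use_case="Churn-risk scoring for retention calls",  # hypothetical use case
    scores={
        "End-users have been interviewed about the proposed solution": 0,
        "A simpler non-ML solution has been explicitly evaluated": 1,
    },
    mitigations={
        "End-users have been interviewed about the proposed solution":
            "Interview five frontline users before the kick-off stage gate",
    },
)
print(record.open_gaps())  # ['A simpler non-ML solution has been explicitly evaluated']
```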

5. Discussion

The PIPA framework addresses a gap that is well-established in the analytics project failure literature but has not previously been operationalized as a pre-build instrument. The most closely related existing tools are project selection matrices used in IT governance [2, p. 361-380; 8, p. 1163-1171], but these operate at the portfolio level and do not incorporate analytics-specific dimensions such as process absorptivity, algorithmic trust, or the explicit evaluation of simpler alternatives.

The framework's primary theoretical contribution is the operationalization of the decision-value chain as an assessment instrument. By requiring evaluators to identify a specific decision-maker, a specific decision, a specific process change, and a specific measurable outcome before development begins, the framework makes the implicit assumptions of analytics project initiation explicit and testable. This shifts the conversation from 'can we build this?' to 'should we build this, and for whom, and with what expected effect?'

The practical contribution is the creation of a decision threshold instrument that can be applied in a half-day workshop before any technical work begins. The resource cost of a PIPA assessment is trivially small relative to the cost of a failed project. Even if the framework prevents only one in ten projects from being incorrectly initiated, the return on assessment effort is substantial.

Several limitations of the framework should be acknowledged. First, the scoring logic and thresholds are derived from the author's practitioner experience and the qualitative patterns identified in the project failure literature, not from a statistically validated study of project outcomes. Empirical validation against a labelled dataset of completed analytics projects – with PIPA pre-scores and documented outcomes – is the necessary next step. Second, the framework does not address all dimensions of project risk: it focuses on use case viability, not on team capability, infrastructure readiness, or data availability. These dimensions require separate assessment instruments and are deliberately excluded from PIPA to maintain focus and usability. Third, the assessment questions require honest, evidence-based answers; if the assessment is conducted primarily to justify a predetermined decision to build, the instrument will produce misleading results. Governance structures that ensure independent or jointly conducted assessments are important for framework effectiveness.

Future research directions include empirical calibration of scoring thresholds against a dataset of completed analytics projects; extension of the framework to address data readiness as a fifth dimension; and integration of PIPA into existing stage-gate governance models for analytics program management.

6. Conclusion

This paper makes three contributions to the analytics project management literature. First, it identifies and documents a specific gap: the absence of a structured, scored, pre-implementation instrument for evaluating analytics use case viability at the individual project level. Second, it proposes the PIPA framework as a response to that gap, grounded in three theoretical constructs – the decision-value chain, adoption as a prerequisite condition, and solution parsimony – and operationalized as a four-dimension, eight-question, 16-point scoring instrument. Third, it provides actionable decision thresholds and an application protocol that can be used by analytics teams and project governance boards without specialist research training.

The central claim of the paper is that the most impactful investment in analytics project success is made before development begins. No governance intervention after project initiation can recover the resources spent on a use case that was never viable. The PIPA framework provides the structured instrument needed to make that pre-initiation judgement with evidence and rigor rather than optimism and assumption.

As analytics investment continues to scale in organizations of all sizes, the ability to distinguish use cases that should be built from those that should not becomes an increasingly valuable organizational capability. The PIPA framework is offered as a contribution to the development of that capability.

References

  1. Brynjolfsson E., McAfee A. The Business of Artificial Intelligence // Harvard Business Review. 2017. July 18. URL: https://hbr.org/2017/07/the-business-of-artificial-intelligence.
  2. Cooper R.G., Edgett S.J., Kleinschmidt E.J. Portfolio Management for New Product Development: Results of an Industry Practices Study // R&D Management. 2001. Vol. 31. No. 4. P. 361-380. DOI: 10.1111/1467-9310.00225.
  3. Davis F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology // MIS Quarterly. 1989. Vol. 13. No. 3. P. 319-340. DOI: 10.2307/249008.
  4. Mumford E. The Story of Socio-Technical Design: Reflections on Its Successes, Failures and Potential // Information Systems Journal. 2006. Vol. 16. No. 4. P. 317-342. DOI: 10.1111/j.1365-2575.2006.00221.x.
  5. Paleyes A., Urma R.-G., Lawrence N.D. Challenges in Deploying Machine Learning: A Survey of Case Studies // ACM Computing Surveys. 2022. Vol. 55. No. 6. Article 114. P. 1-29. DOI: 10.1145/3533378.
  6. Sculley D., Holt G., Golovin D., et al. Hidden Technical Debt in Machine Learning Systems // Advances in Neural Information Processing Systems. 2015. Vol. 28. P. 2503-2511.
  7. Venkatesh V., Morris M.G., Davis G.B., Davis F.D. User Acceptance of Information Technology: Toward a Unified View // MIS Quarterly. 2003. Vol. 27. No. 3. P. 425-478. DOI: 10.2307/30036540.
  8. Wieder B., Ossimitz M.-L. The Impact of Business Intelligence on the Quality of Decision Making: A Mediation Model // Procedia Computer Science. 2015. Vol. 64. P. 1163-1171. DOI: 10.1016/j.procs.2015.08.599.
