Every pharma company in India has a field force productivity problem. Not because their reps are not working hard. Most of them are. The problem is that the entire system for measuring field force performance has been built around the wrong question.
The question most companies are answering is: how much activity did our field force generate? Calls made. Doctors visited. Stockist beats completed. Samples distributed. Orders logged. These numbers flow into SFA systems and CRM dashboards and get reviewed in monthly performance meetings. Reps are ranked, incentives are paid, territories are evaluated, all based on this activity data.
The question that actually matters is different. It is whether that field effort is converting into downstream offtake. Whether the doctor visit led to a prescription. Whether the stockist call led to product actually moving to retailers. Whether the territory coverage is building market share or just generating logged visits.
Most pharma companies in India cannot answer that second question with any confidence. And because they cannot answer it, they are running their largest operational cost center, the field force, without knowing whether it is working.
The Activity Trap
The field force in Indian pharma evolved in an era when activity was the best available proxy for impact. You could not easily track whether a doctor visit led to a prescription. You could not link a stockist call to secondary offtake data. The distribution network was not digitized. So companies built systems that measured what they could measure, which was inputs, and assumed that more inputs would produce more outputs.
That assumption made reasonable sense when the alternative was no measurement at all. The problem is that the assumption has never been seriously tested, and the systems built on it have become deeply embedded in how companies hire, manage, and incentivize their field teams.
The result is a field force culture that has learned to optimize for activity metrics rather than market outcomes. A rep who makes 12 calls a day, logs them all correctly, and ensures his beat completion is at 95 percent is a good rep by every measure his manager has access to. Whether those 12 calls actually moved product is a question that rarely gets asked because the data to answer it is not available in a usable form.
This is not a rep-level failure. It is a systems failure. When you measure activity, you get activity. When people are incentivized on call rates and beat completion, they optimize for call rates and beat completion. That is rational behavior inside the system that has been created for them.
What Activity Metrics Miss
To understand what is lost when you measure activity instead of impact, it helps to think through what actually happens between a field rep's visit and a commercial outcome.
A medical rep visits a doctor. He details a product, leaves samples, maybe has a two-minute conversation about a clinical study. That visit gets logged. What does not get logged is whether the doctor started prescribing, whether prescriptions increased, whether the patients who received those prescriptions actually filled them at the pharmacy. The visit is captured. The downstream chain of events that determines whether the visit mattered is invisible.
A field rep visits a stockist. He checks stock levels, maybe pushes for an order, drops off scheme communication material. The visit gets logged. What does not get logged is whether the stockist placed the order, whether the stock he ordered actually moved to retailers in the next 30 days, whether the scheme communication reached any retailer at all. The activity is captured. The commercial outcome is not.
This gap would not matter much if activity and impact were strongly correlated. If every doctor visit reliably produced a prescription increase, measuring visits would be a reasonable proxy for measuring prescriptions. But that correlation is weak and highly variable. A rep visiting the same 10 doctors every month in a territory where those doctors have already prescribed everything they are going to prescribe is generating activity with no incremental impact. A rep who has identified three high-potential doctors who are not yet prescribing and is systematically building those relationships is generating impact that may not show up as high visit counts.
The activity measurement system cannot distinguish between these two reps. The impact measurement system can.
The Stockist Call Problem Specifically
The doctor-visit measurement problem is well understood in the industry, even if it has not been solved. The stockist call measurement problem gets less attention but is in some ways more consequential for commercial outcomes.
Most pharma companies have a significant portion of their field force doing stockist and retailer coverage work alongside or instead of doctor detailing. These reps are supposed to ensure product availability, push schemes through to the trade, collect secondary sales data, and manage the relationship with distributors and retailers.
The measurement system for this work is even more disconnected from outcomes than the doctor-call system. A rep logs that he visited a stockist. He may log the order that was placed. He may capture a rough estimate of secondary sales. But what actually happened in that territory in terms of retail availability, scheme reach, and offtake movement is not captured in any systematic way.
The practical consequence is that you can have a territory with 100 percent beat completion and still have 40 percent of your retailers inactive, your scheme awareness at retailers below 20 percent, and near-expiry inventory building at two of your largest stockists. All of that would be invisible in the standard field force reporting.
When you add secondary sales visibility to the picture, the same territory looks completely different. You can see exactly which retailers are active. You can see the offtake trend at the stockist level. You can correlate rep visit frequency to specific retailers with their ordering behavior. You can see whether the scheme that was supposed to be communicated actually showed up in retailer behavior.
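As a rough illustration, the territory health indicators described above reduce to simple aggregates once secondary sales data exists. The column names, the sample figures, and the 90-day activity threshold below are assumptions for the sketch, not taken from any particular SFA or secondary-sales system:

```python
import pandas as pd

# Hypothetical secondary sales extract: one row per retailer in a territory.
retailers = pd.DataFrame({
    "retailer_id": ["R1", "R2", "R3", "R4", "R5"],
    "orders_last_90d": [4, 0, 2, 0, 1],      # orders placed with the stockist
    "scheme_availed": [True, False, False, False, False],
})

# A retailer with no stockist orders in 90 days is treated as inactive here;
# the cutoff is an assumption and would be tuned per category and market.
active_rate = (retailers["orders_last_90d"] > 0).mean()
scheme_reach = retailers["scheme_availed"].mean()

print(f"Active retailer rate: {active_rate:.0%}")
print(f"Scheme reach at the counter: {scheme_reach:.0%}")
```

The point is not the arithmetic, which is trivial, but that none of these numbers can be computed from beat-completion data alone.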
That is when the activity-versus-impact question becomes answerable.
How You Actually Measure Field Force Impact
Measuring field force impact rather than just activity requires connecting two data layers that are typically kept separate: the rep activity data and the downstream commercial outcomes data.
On the rep side, you already have visit logs, call reports, order capture, and beat completion data. Most companies have this reasonably well covered through SFA tools, even if the data quality is inconsistent.
What you need to add is the outcomes layer. At the doctor end, that means prescription tracking, which is available through audited prescription data sources and is increasingly accessible at a more granular level than it used to be. At the trade end, it means secondary sales data showing actual stockist-to-retailer offtake, retailer activity rates, and scheme adoption at the counter level.
When you have both layers and can connect them geographically and temporally, specific questions become answerable. Does rep visit frequency to a particular class of doctors correlate with prescription movement in that micro-market? Which reps are generating prescription growth versus which are maintaining existing prescribers with no incremental impact? In the trade channel, which rep territories show the strongest correlation between stockist visits and secondary offtake? Where is field activity generating real market movement, and where is it just maintaining presence?
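A minimal sketch of the activity-to-outcome join, assuming a monthly panel keyed on territory and month. All names and figures below are hypothetical and stand in for SFA visit logs on one side and secondary offtake data on the other:

```python
import pandas as pd

# Illustrative monthly panel: rep visits and secondary offtake per territory.
visits = pd.DataFrame({
    "territory": ["T1", "T1", "T1", "T2", "T2", "T2"],
    "month": ["2024-01", "2024-02", "2024-03"] * 2,
    "stockist_visits": [8, 10, 12, 12, 11, 13],
})
offtake = pd.DataFrame({
    "territory": ["T1", "T1", "T1", "T2", "T2", "T2"],
    "month": ["2024-01", "2024-02", "2024-03"] * 2,
    "secondary_units": [400, 480, 560, 510, 505, 500],
})

# Connect the two layers on geography and time.
panel = visits.merge(offtake, on=["territory", "month"])

# Per-territory correlation between visit intensity and offtake movement.
# In this toy data, T1's visits track offtake while T2 shows heavy coverage
# with offtake slipping anyway: activity without market movement.
corr = panel.groupby("territory")[["stockist_visits", "secondary_units"]].apply(
    lambda g: g["stockist_visits"].corr(g["secondary_units"])
)
print(corr)
```

A real implementation would need longer time windows, lag handling, and controls for seasonality and scheme periods; the join logic itself is the simple part.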
These are not exotic analytical questions. They are the basic questions any serious commercial leader wants answered. The barrier has always been data, not analytical capability.
The Incentive Design Problem
This matters beyond performance tracking because field incentive design in Indian pharma is built almost entirely on activity and primary sales metrics. Reps are incentivized on call rates, on primary orders generated from their territories, on new product adoption by stockists. Some companies layer in doctor prescription targets, but these are often based on audit data that is too aggregated and too lagged to be actionable at the individual rep level.
The consequence is that incentive plans are systematically rewarding behaviors that may or may not be connected to what the company actually needs from its field force. A rep in a stockist-heavy role who drives high primary offtake but has no idea whether that product is reaching retailers or moving to consumers is hitting his targets while potentially contributing to channel stuffing. A rep who is patiently building retailer relationships and scheme adoption in a Tier 3 market may have lower primary sales numbers but is creating more durable commercial value.
When incentive plans cannot distinguish between these situations, they tend to reinforce the behaviors that show up easily in the metrics being measured. Activity goes up. Managed primary sales numbers go up. Underlying market quality, meaning retailer breadth, scheme reach, and actual consumer offtake, either stagnates or erodes slowly.
Fixing incentive design requires fixing the measurement system first. You cannot reward outcomes you cannot measure. But once you have secondary visibility connected to field activity data, you can design incentives around active retailer count in a territory, around scheme penetration at the counter level, around sell-through rates for stockists covered by a rep, around new retailers added to the active network. These are outcomes that directly reflect commercial value, and incentivizing them changes field behavior in ways that purely activity-based plans cannot.
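One way such an outcome-weighted plan could be scored, as a sketch only. The metric names, the cap on retailer additions, and the weights below are illustrative assumptions, not a recommended plan design:

```python
def incentive_score(new_active_retailers: int,
                    scheme_penetration: float,    # share of covered counters, 0..1
                    sell_through_rate: float) -> float:  # secondary / primary
    """Hypothetical outcome-weighted incentive score in [0, 1]."""
    # Weights would be set by the commercial team per role and market.
    weights = {"retailers": 0.4, "scheme": 0.3, "sell_through": 0.3}
    # Cap retailer additions (10 here, arbitrarily) so that no single
    # metric can dominate the composite score.
    retailer_component = min(new_active_retailers / 10, 1.0)
    return round(
        weights["retailers"] * retailer_component
        + weights["scheme"] * scheme_penetration
        + weights["sell_through"] * min(sell_through_rate, 1.0),
        3,
    )

# A rep adding retailers with strong sell-through outscores one with
# thinner market-building work, regardless of raw call counts.
print(incentive_score(8, 0.45, 0.9))
print(incentive_score(2, 0.10, 0.4))
```

The design choice worth noting is that every input is an outcome observable in secondary data, so the score cannot be moved by logging more calls.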
What Changes When You Get This Right
The most immediate change when you connect field activity to impact measurement is that the performance conversation becomes honest in a way it currently is not.
Right now, a rep who is not generating market movement can hide behind activity numbers for a long time. Calls are made. Beat completion is high. The underlying commercial problem in the territory, the inactive retailers, the stagnant prescription base, the stockist who has been pushing the same inventory for six months, none of that surfaces clearly until it becomes a serious issue.
With impact measurement in place, the territory health is visible in near real time. A manager reviewing a rep's territory can see that visit frequency is high but active retailer count has not moved in three months. That conversation is now grounded in something concrete, not in a manager's gut feel or a rep's self-reported version of what is happening.
The flip side matters equally. For a rep who is genuinely building market quality, adding new retailers, improving scheme penetration, driving real offtake growth, that work becomes visible and recognizable rather than invisible. Currently, reps who do the hard long-term work of building a territory can be indistinguishable in the metrics from reps who are just making calls. That is a retention problem as much as it is a measurement problem.
At the organizational level, connecting activity to impact changes where you invest. You stop deploying field effort uniformly and start deploying it based on where it generates the most commercial return. Some territories need more calls to doctors. Some need more trade coverage. Some need a fundamentally different approach. You can only figure that out when you can see what is actually moving.
The Question Worth Asking
The next time a commercial review happens and the field force productivity deck goes up on screen, it is worth asking a simple question: of everything being presented, how much of it tells us whether field effort is converting into market movement, and how much of it just tells us that the field force was busy?
If the honest answer is that it mostly tells you the field force was busy, that is the measurement problem to solve. Not because the field force is not working, but because without the right measurement, you cannot manage what you cannot see, and the difference between a field force that generates activity and one that generates market movement is the difference between an operational cost and a genuine commercial lever.
The data to answer this question is available. The connection between field activity and downstream offtake can be made. Companies that make it will manage their field force in a fundamentally different way than companies that are still counting calls.