




February is our first clean look at how larger ecommerce businesses are carrying the 2025 story into 2026. The question is straightforward: are $50M–$100M and $100M+ businesses still buying growth on weaker unit economics, or are they starting to repair efficiency while they scale?
This piece focuses on two size bands:
All numbers cited are median year-over-year changes from Northbeam's February 2026 dataset.
In our 2025 review, most of the market-level topline expansion came from businesses above $50M in annual revenue. On a median basis:
The pattern was consistent: as you move up the revenue ladder, you see more spend, more revenue, and higher first-time CAC. Larger businesses were willing to trade efficiency for market share.
February 2026 tells us whether that posture is still the right one, or whether the biggest players are finally converting scale into healthier economics.

Across all businesses, median performance in February 2026 looked like this, year over year:
So the "typical" business is growing revenue but paying more for every new customer, with new-customer revenue slightly negative on the year.
Now look at the revenue-based size cut for February:
Both cohorts are growing faster than the market on topline, but neither is escaping pressure on new-customer acquisition economics.
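For reference, the two ratios at the center of this analysis are computed conventionally; here is a minimal sketch with placeholder figures, not Northbeam data:

```python
# Minimal sketch of the conventional definitions behind the metrics in
# this piece. All figures below are illustrative placeholders, not
# Northbeam data.

def mer(revenue: float, ad_spend: float) -> float:
    """Marketing efficiency ratio: revenue per dollar of ad spend."""
    return revenue / ad_spend

def cac(ad_spend: float, new_customers: int) -> float:
    """Customer acquisition cost: ad spend per newly acquired customer."""
    return ad_spend / new_customers

# If spend grows faster than new-customer revenue, new-customer MER falls
# and first-time CAC rises -- the squeeze described below.
print(mer(1_400_000, 500_000))   # blended MER: 2.8
print(mer(600_000, 500_000))     # new-customer MER: 1.2
print(cac(500_000, 4_000))       # first-time CAC: 125.0
```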

For $50M–$100M businesses, February looks like a continuation of the 2025 trade-off, with more pressure on acquisition economics than the headline numbers suggest.
Spend up 16.37% and revenue up 14.83% is strong. But new-customer revenue is down 7.46%, meaning existing customers are doing most of the work. New-customer MER is down 17.65% and new-customer CAC is up 28.69%, meaning each new buyer is substantially more expensive and less efficient to acquire than a year ago.
If you sit in this size band, February should be treated less as proof that the 2025 playbook still works and more as a signal to harden guardrails before the economics deteriorate further.

The $100M+ cohort tells a more mixed story than the blended numbers suggest.
Revenue outpacing spend and blended MER up 7.40% reflects the leverage of a large, loyal customer base. But new-customer MER is down 6.45% and new-customer CAC is up 12.30%, consistent with the same market dynamic squeezing the $50M–$100M band, just less severely. New-customer revenue is at least positive (+2.49%), which separates this group from smaller peers, but the direction is the same.
For operators above $100M, the takeaway is measured optimism. Your retained customer base is a real asset. But treat new-customer metrics as a genuine constraint, not a lagging indicator that will self-correct.
January 2026 already showed an early split by company size in our monthly benchmarks.
$50M–$100M businesses pushed much harder in February, with spend and revenue growth accelerating significantly, but new-customer revenue flipped negative and CAC remained elevated. $100M+ businesses saw blended MER improve substantially, but new-customer MER swung from positive to negative and CAC worsened. At both size bands, acquisition economics tightened from January to February.
If you are in the $50M–$100M band, the priority is guardrails. New-customer MER down nearly 18% and CAC up nearly 29% is a warning level, not a target. Any channel or creative that isn't helping reverse negative new-customer revenue should be constrained or redirected toward LTV.
If you are in the $100M+ band, the priority is protecting what's working. Your repeat base is giving you blended efficiency that smaller businesses can't match, but new-customer acquisition is getting more expensive. Keep first-time MER and CAC as first-class KPIs in weekly reviews, and treat creative throughput as a genuine bottleneck at current spend levels.
From our 2025 review through February 2026, the story above $50M comes down to this: growth is available, but the new-customer acquisition environment is getting harder at every scale, and spending more doesn't automatically fix it.
The $50M–$100M band is most exposed, with new-customer revenue negative and CAC up nearly 30%. The $100M+ band has the better blended story, but new-customer metrics are heading in the same direction.
For Q2 2026, let economics dictate how hard you press. If you are under $100M, treat February as a ceiling on acceptable new-customer deterioration. If you are above $100M, treat February as a reminder that your repeat base is an asset worth protecting, not a reason to ease up on acquisition discipline.

In retail, small increases in basket size can generate meaningful revenue gains without adding new traffic.
UPT, or Units per Transaction, refers to how many items customers purchase in each completed sale. This guide explains what UPT is, how to calculate UPT accurately, how it differs from related metrics like average order value, and practical strategies to improve it in stores and online.
Treated correctly, UPT is a powerful and controllable operational lever.
UPT, or Units per Transaction, measures how many items a customer purchases in a single transaction.
If 100 customers complete purchases and collectively buy 250 items, your UPT is 2.5. In simple terms, it tells you the average basket size in units, not dollars.
UPT is widely used across brick-and-mortar stores, e-commerce sites, and omnichannel retail environments:
Because it focuses on customers who already decided to buy, improving UPT can increase revenue without increasing traffic or acquisition costs. Instead of finding more shoppers, you help existing buyers purchase just one more item. Over time, those small increases compound into meaningful gains in retail performance metrics.

The formula for UPT is: UPT = Total Units Sold ÷ Total Transactions
If you sell 1,200 items across 400 completed purchases, your UPT is 3.0, meaning that, on average, each transaction includes three items.
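As a minimal sketch, here is the same arithmetic in code, using the figures from the examples above:

```python
# UPT = total units sold / total completed transactions.

def units_per_transaction(total_units: int, total_transactions: int) -> float:
    """Average basket size in units, not dollars."""
    if total_transactions == 0:
        return 0.0
    return total_units / total_transactions

print(units_per_transaction(250, 100))    # 2.5
print(units_per_transaction(1200, 400))   # 3.0
```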
"Units" should be clearly defined. A unit typically means an individual sellable item (not a dollar amount). If a customer buys three shirts, that is three units. If you sell bundles or kits, you need a consistent rule. Some retailers count a bundle as one unit because it is sold as a single SKU. Others break it into its component items for internal analysis. The key is consistency.
A "transaction" refers to a completed purchase event. In-store, that means a finalized receipt. Online, it means a confirmed order. Returns complicate the picture: if an item is fully returned, most teams subtract that unit from total units sold in the reporting period. For partial refunds, ensure your system adjusts unit counts accurately.
Common mistakes include mixing gross and net units, failing to account for returns, or inconsistently treating bundles. Clear definitions and standardized reporting prevent inflated or misleading UPT figures.
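One way to make those rules concrete is a small helper that computes UPT on net units with an explicit bundle convention; the field names and figures below are illustrative, not tied to any particular POS system:

```python
# Sketch of a "net UPT" calculation that applies the rules above:
# subtract fully returned units and count bundles by one consistent rule.

def net_upt(gross_units: int, returned_units: int, transactions: int,
            bundles_sold: int = 0, items_per_bundle: int = 1,
            explode_bundles: bool = False) -> float:
    """Compute UPT on net units with a consistent bundle convention."""
    units = gross_units - returned_units
    if explode_bundles:
        # Count each bundle as its component items instead of one SKU.
        units += bundles_sold * (items_per_bundle - 1)
    return units / transactions

# Same period, two conventions -- pick one and apply it everywhere.
print(net_upt(1200, 50, 400))                          # bundles as one unit: 2.875
print(net_upt(1200, 50, 400, bundles_sold=30,
              items_per_bundle=3, explode_bundles=True))  # bundles exploded: 3.025
```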
UPT does not operate in isolation. It influences, and is influenced by, several other core retail KPIs:
Promotions, bundles, and threshold incentives often increase UPT. Price increases may reduce UPT if customers consolidate purchases.
If higher UPT is driven by heavy discounting or low-margin add-ons, gross margin per transaction may decline. Sustainable improvement requires balancing basket size with pricing discipline and margin performance.
Improving UPT in physical retail is largely about influencing behavior at the point of purchase through training, merchandising, and smart incentive design.
Store associates play a direct role in basket size. Training teams to suggest complementary products, ask open-ended questions, and make relevant recommendations can meaningfully increase units per transaction. The goal is not scripted upselling, but helpful guidance that enhances the customer experience.
Physical layout shapes buying behavior. Placing complementary products near each other, such as accessories next to core items, increases the likelihood of incremental add-ons. End caps, checkout displays, and curated displays also create natural opportunities to expand baskets.
Pre-packaged bundles simplify decision-making and encourage customers to purchase multiple items at once. Multi-buy offers, such as "three for two," can lift UPT while maintaining perceived value.
Threshold promotions, such as "buy two, save," can push shoppers to add one more item. However, guardrails are essential. Promotions should protect margin, avoid training customers to wait for discounts, and be measured carefully to ensure long-term profitability.

In e-commerce, UPT growth depends on intelligent recommendations, thoughtful incentives, and a frictionless buying experience.
On-site recommendations are one of the most effective digital levers. "Frequently bought together," "Complete the look," and in-cart add-on suggestions surface complementary items at moments of high intent. The key is relevance. Irrelevant recommendations reduce trust and lower engagement.
Digital storefronts make it easy to package related products into curated kits. Bundles reduce decision fatigue and increase perceived value while naturally increasing units per transaction. Clear pricing and visible savings strengthen performance.
Shipping thresholds encourage customers to add one more item to qualify for free delivery. When calibrated properly, this tactic lifts UPT without heavy discounting. The threshold should be based on average order value and margin targets.
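As one illustrative way to calibrate, the sketch below places the threshold roughly one add-on item above current AOV and checks that the added item's margin covers the absorbed shipping cost; all figures are assumed for the example:

```python
# Illustrative threshold calibration: set free shipping slightly above
# current AOV so the typical order needs about one more item to qualify.
# Numbers are placeholder assumptions, not benchmarks.

aov = 62.00                 # current average order value
avg_item_price = 24.00      # typical add-on item price
shipping_cost = 8.50        # cost absorbed when shipping is free
margin_rate = 0.55          # contribution margin on incremental units

threshold = aov + 0.5 * avg_item_price   # one add-on away for most carts
incremental_margin = avg_item_price * margin_rate

# The tactic only pays if the added item's margin covers the shipping give.
print(f"threshold: ${threshold:.2f}")                       # $74.00
print(f"margin per added item: ${incremental_margin:.2f} "
      f"vs shipping ${shipping_cost:.2f}")                  # $13.20 vs $8.50
```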
Behavioral data enables personalized recommendations and dynamic cross-sell placements. Returning customers can see tailored suggestions based on browsing or purchase history, increasing basket expansion opportunities.
Too many pop-ups, irrelevant suggestions, or confusing layouts can backfire. UPT strategies should enhance the experience, not interrupt it. Clean design and seamless checkout preserve conversion while encouraging incremental units.
While increasing UPT can lift revenue, it is not always the right objective. Retail teams must weigh basket growth against margin, customer experience, and long-term loyalty.
UPT often rises through discounts, multi-buy offers, or low-priced add-ons. If those incremental units carry thin margins, gross margin per transaction may decline even as basket size grows. A higher UPT that erodes profitability is not sustainable.
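A hypothetical worked example makes the trade-off concrete:

```python
# Worked example: a multi-buy offer lifts UPT but can lower gross margin
# per transaction. All figures are hypothetical.

def margin_per_transaction(units: float, avg_price: float, unit_cost: float) -> float:
    return units * (avg_price - unit_cost)

# Before: 2.0 units at full price.
before = margin_per_transaction(units=2.0, avg_price=30.00, unit_cost=18.00)
# After a multi-buy discount: UPT rises to 3.0, but the effective
# average price per unit drops.
after = margin_per_transaction(units=3.0, avg_price=25.00, unit_cost=18.00)

print(before, after)  # 24.0 vs 21.0 -- bigger basket, thinner margin
```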
Aggressive upselling or forced bundling can increase post-purchase regret. Customers who feel nudged into unnecessary items are more likely to return products, which distorts UPT reporting and increases operational costs.
Short-term basket expansion should not come at the expense of trust. Retailers that consistently prioritize helpful recommendations over pushy tactics tend to build stronger repeat purchase behavior.
If customers routinely remove recommended add-ons, abandon carts when bundles are required, or show rising return rates, your strategy may be too aggressive.
The most effective UPT strategies enhance value and convenience. When customers perceive recommendations as relevant and beneficial, basket size grows naturally without undermining brand equity.
UPT is most powerful when tracked alongside complementary metrics and governed with clear definitions and discipline.
UPT should never be reviewed in isolation. Pair it with:
Together, these metrics show whether UPT growth is healthy or masking underlying issues.
Regular analysis helps isolate what is driving change:
Segmenting results prevents broad conclusions based on isolated wins.
Strong governance protects data integrity and margin:
Clear ownership and standardized reporting ensure UPT becomes a strategic lever, not a vanity metric.
Improving UPT requires structured execution, not isolated tactics. A phased approach ensures sustainable results.
Begin by calculating current UPT across stores, channels, and key categories. Break results down by time period and customer segment to identify variability. This baseline creates benchmarks and highlights where performance is strongest or weakest.
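As a sketch of that baseline step, assuming a simple transaction export with one row per completed purchase (the column names are illustrative):

```python
# Phase 1 baseline: UPT by store and customer segment using pandas.
import pandas as pd

tx = pd.DataFrame({
    "store":   ["A", "A", "B", "B", "B", "web"],
    "segment": ["new", "repeat", "new", "repeat", "repeat", "new"],
    "units":   [2, 4, 1, 3, 2, 5],
})

# One row per transaction, so UPT = mean units per row within each cut.
baseline = tx.groupby(["store", "segment"])["units"].agg(
    transactions="count", upt="mean"
)
print(baseline)
```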
Analyze where incremental gains are most realistic. For stores, this may include associate training or merchandising adjustments. For e-commerce, it may involve cart recommendations or free shipping thresholds. Prioritize initiatives based on potential lift, margin impact, and ease of execution.
Pilot changes in controlled environments. Test new cross-sell scripts in select stores or adjust recommendation placements on a subset of traffic. Measure UPT alongside AOV, margin per transaction, conversion rate, and return rate to ensure improvements are balanced.
If results are positive and sustainable, roll out initiatives more broadly. Document best practices, align incentives, and maintain consistent reporting. Continuous monitoring ensures UPT growth remains profitable and aligned with brand strategy.
UPT measures basket size and directly influences retail revenue efficiency. When customers add even one more item per transaction, the impact compounds across stores, channels, and time.
Accurate calculation requires consistent definitions of units and transactions. Without disciplined measurement, UPT can easily be overstated or misunderstood.
Sustainable UPT growth also depends on thoughtful merchandising, effective associate training, and seamless experience design. It works best when optimized alongside margin, conversion rate, and retention, ensuring that basket expansion strengthens profitability and long-term customer trust rather than eroding it.

Most data problems do not begin in dashboards, but in the logic that connects one system to another.
Data mapping is critical because it defines how fields align, how definitions are interpreted, and whether numbers can be trusted across tools. When mappings are unclear or inconsistent, even the most sophisticated reporting becomes unreliable.
In this guide, we will explain what data mapping is and why it underpins accurate measurement. We'll share practical data mapping examples for marketing data and outline a structured approach to implementing and maintaining it over time.
The goal is for you to be able to treat data mapping as a repeatable discipline that keeps your data aligned as your systems, tools, and teams evolve.
Data mapping is the process of defining how fields from one system connect to fields in another. It answers practical questions like: Which source field populates this destination field? What format should it be in? Should it be transformed, normalized, or combined with other values?
In simple terms, data mapping creates the blueprint that allows systems to agree on what data means and how it moves.
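As a minimal sketch, that blueprint can itself be expressed as data; the field names and transform labels here are hypothetical:

```python
# A mapping "blueprint" as data: each entry answers which source field
# feeds which destination field, and how. Names are hypothetical.

FIELD_MAP = [
    {"source": "utm_campaign",      "dest": "campaign_name", "transform": "lowercase"},
    {"source": "order_total_cents", "dest": "revenue_usd",   "transform": "cents_to_dollars"},
    {"source": "country",           "dest": "country_code",  "transform": "iso_3166_alpha2"},
]

for rule in FIELD_MAP:
    print(f'{rule["source"]} -> {rule["dest"]} ({rule["transform"]})')
```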
Data mapping is not the same as ETL (Extract, Transform, Load) or reporting.
If the mapping is wrong, the pipeline may run perfectly and the dashboard may look polished, but the numbers will still be misleading.
And mapping errors do cascade. A mismatched identifier can break attribution; an inconsistent definition of revenue can distort performance reporting; a missing normalization rule can inflate counts across systems.
When mapping is ignored early, downstream trust erodes, and teams spend more time debugging than deciding.
A reliable data mapping process starts with structure and clarity. Each component builds on the next, ensuring that data moves accurately, consistently, and in a way that supports downstream reporting and decision-making.
Every mapping begins with clearly defined source systems. This includes marketing platforms, CRM, product databases, finance tools, and any system contributing data.
Just as important is ownership. Each source should have a responsible team or individual who understands how the data is generated, maintained, and changed. Without ownership, inconsistencies and silent schema changes go unnoticed.
Mapping requires field-level precision. Teams must define exact field names, formats, and data types. Is a date stored as a timestamp or string? Is revenue recorded in cents or dollars? Small mismatches in format or type can create large downstream discrepancies.
Not all data can be copied directly from source to destination. Transformation rules define how values are cleaned, standardized, combined, or filtered. Normalization ensures consistency across systems, such as aligning country codes, campaign naming conventions, or currency formats.
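A short sketch of what such transformation rules might look like in practice, with illustrative field names and conventions (cents versus dollars, ISO timestamps, mixed country labels):

```python
# Normalize source values into the destination schema's conventions.
from datetime import datetime, timezone

COUNTRY_ALIASES = {"USA": "US", "United States": "US", "U.K.": "GB"}

def normalize(record: dict) -> dict:
    return {
        # Cents in the source system, dollars in the destination schema.
        "revenue_usd": record["order_total_cents"] / 100,
        # Timestamps arrive as ISO strings; parse to UTC-aware datetimes.
        "ordered_at": datetime.fromisoformat(record["ordered_at"])
                              .astimezone(timezone.utc),
        # Country values arrive in mixed conventions; align to one code set.
        "country_code": COUNTRY_ALIASES.get(record["country"], record["country"]),
    }

print(normalize({"order_total_cents": 12999,
                 "ordered_at": "2026-02-14T09:30:00+01:00",
                 "country": "USA"}))
```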
The destination schema defines how data should look once integrated. Clear definitions for metrics, dimensions, and relationships prevent ambiguity. A shared understanding of what each field represents reduces reporting disputes later.
Data mappings should be documented and treated as living artifacts. Version control ensures changes are tracked, reviewed, and communicated, preventing unexpected breaks in reporting.
Clear visualization turns data mapping from a technical artifact into a shared reference. The right format helps teams understand how data flows, where transformations occur, and how systems connect.
Here are a few best practices to keep in mind:
Different visualization types serve different purposes:
Tables are best for precision and documentation, while diagrams are better for communication and alignment. Early planning conversations often benefit from visual flow diagrams, while implementation requires detailed mapping tables. Mature teams typically maintain both.
As data integration mapping expands, mappings can become dense and difficult to interpret. Group related fields, separate logical domains, and avoid overcrowded diagrams. Modular documentation keeps complexity manageable.
Consistent naming conventions reduce ambiguity. Annotations should explain transformations, exceptions, and assumptions. Clear labeling prevents future confusion when systems evolve.
Visualization should not exclude business stakeholders. Simplified diagrams, plain-language definitions, and contextual notes help non-technical teams understand how data supports reporting and decisions.

Even well-designed mappings can break down over time. Most failures are not technical at their core; they stem from unclear definitions, shifting systems, or over-engineered logic.
Problem:
Solution:
Problem:
Solution:
Problem:
Solution:
Problem:
Solution:
Strong data mapping does not just improve integration accuracy; it also improves measurable operational outcomes. When mappings are well designed and actively maintained, teams spend less time debugging and more time making decisions.
Core KPIs influenced by effective data mapping include:
Monitoring these indicators helps detect mapping weaknesses early. Beyond KPIs, teams should implement regular diagnostic checks, such as:
Mapping quality rarely fails loudly. Usually, it degrades quietly. Proactive measurement ensures small inconsistencies do not evolve into systemic reporting distrust.
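Two lightweight checks of this kind, sketched with assumed thresholds and field names:

```python
# Quiet-failure diagnostics: row-count reconciliation and null rates on
# mapped fields. Thresholds and field names are assumptions.

def check_row_counts(source_rows: int, dest_rows: int,
                     tolerance: float = 0.01) -> bool:
    """Flag when source and destination diverge beyond the tolerance."""
    return abs(source_rows - dest_rows) / max(source_rows, 1) <= tolerance

def check_null_rate(values: list, threshold: float = 0.05) -> bool:
    """Flag when a mapped field goes empty more often than expected."""
    nulls = sum(1 for v in values if v is None)
    return nulls / max(len(values), 1) <= threshold

print(check_row_counts(10_000, 9_940))                  # True: within 1%
print(check_null_rate(["US", None, "GB", None], 0.05))  # False: 50% null
```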
Data mapping is not a one-time technical task; it requires ongoing governance to remain reliable as systems evolve and teams grow. Without structure and accountability, even well-designed mappings degrade over time.
Governance considerations should include:
Every data source and destination should have a defined owner responsible for accuracy, changes, and documentation. Ownership ensures someone is accountable when discrepancies arise and prevents silent updates from breaking integrations.
Schema changes, new fields, and transformation updates should follow a documented review process. Version control and change logs help teams understand what changed, why it changed, and how it affects downstream reporting.
Periodic audits validate that mappings still align with business definitions and system configurations. Refresh cycles allow teams to clean up deprecated fields, adjust logic, and ensure the mapping framework continues to support evolving reporting needs.
Governance transforms mapping from a fragile setup into durable infrastructure.

A strong data mapping workflow is iterative and collaborative. It moves from definition to deployment, then continues through monitoring and refinement.
Start by clarifying the use case and downstream requirements. What reports, dashboards, or models will depend on this mapping?
Identify source systems, required fields, definitions, and ownership. Clear scope prevents unnecessary transformations and keeps the mapping aligned with business needs.
Document how each source field connects to the destination schema. Define transformation rules, normalization logic, and key relationships.
This step should produce a structured mapping artifact, such as a field-level table or diagram, that can be reviewed by stakeholders.
Validate mappings before full deployment. Run reconciliation checks, confirm identifier integrity, and compare sample records across systems. Testing ensures that technical accuracy aligns with business definitions.
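As a sketch of such a reconciliation, assuming a shared order identifier and one mapped field (both hypothetical), the comparison below surfaces a missing identifier and a cents-versus-dollars mapping bug:

```python
# Pre-deployment reconciliation: join sample records from both systems on
# a shared identifier and compare the mapped values. Names are hypothetical.

source = {"ord_123": {"revenue_usd": 129.99},
          "ord_124": {"revenue_usd": 54.00},
          "ord_125": {"revenue_usd": 19.00}}
dest   = {"ord_123": {"revenue_usd": 129.99},
          "ord_124": {"revenue_usd": 5400.0}}   # cents loaded as dollars

for order_id, src in source.items():
    dst = dest.get(order_id)
    if dst is None:
        print(f"{order_id}: missing in destination")          # identifier break
    elif abs(src["revenue_usd"] - dst["revenue_usd"]) > 0.01:
        print(f"{order_id}: value mismatch {src} vs {dst}")   # mapping bug
```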
Move the mapping into production pipelines or sync jobs. Ensure version control and documentation are updated to reflect the deployed configuration.
After deployment, monitor data freshness, completeness, and report consistency. Investigate anomalies quickly to prevent silent degradation.
Effective mapping requires coordination across analytics, engineering, operations, and business teams. Shared visibility and structured review cycles reduce friction and build trust in the resulting data.
Data mapping tools and workflows determine whether systems align or quietly contradict each other. When mappings are thoughtfully designed and clearly documented, data moves with context and consistency instead of ambiguity.
Visual examples and structured artifacts make the logic understandable across technical and non-technical teams. Just as important, governance keeps mappings from degrading over time. Ownership, reviews, and validation routines ensure small changes do not turn into systemic reporting errors.
Strong data mapping is not just an integration task, but the very foundation for trustworthy measurement, reliable attribution, and confident decision-making.
