🚀 One Product, Many Ways to Sell: CPQ vs. Revenue Cloud Advanced (RCA)
In the Salesforce world, selling both one-time and subscription-based products often turns into a catalog nightmare. If you've built or managed a CPQ catalog, you know the pain: duplicate SKUs, tangled pricing rules, and messy renewals. But with Revenue Cloud Advanced (RCA), there's a smarter way.
💡 The Challenge with Traditional CPQ
In Salesforce CPQ, when a single offering (say "Premium Analytics") is sold as both a one-time license and a subscription:
- You need to create two separate products (e.g., Analytics - Perpetual and Analytics - Subscription).
- Each gets its own price book entries, discount schedules, and renewal rules.
- Every change means double the work, and double the risk of misalignment.
This tight coupling between product and pricing model is a major operational drag, especially as offerings evolve.
🧩 The RCA Advantage: Product Selling Models
Enter Revenue Cloud Advanced, which introduces Product Selling Models, a game-changer for how you structure your catalog. With RCA:
- You define one product (e.g., Analytics Suite).
- You attach multiple selling models: One-Time, Term Subscription, or Evergreen.
- Sales reps simply pick the selling model at quote time.
👉 No duplicate SKUs. 👉 No redundant rules. 👉 No rework when pricing changes.
⚙️ Example: Bringing It Together
Business case: you sell a software license that can be purchased outright or subscribed to monthly.
| Approach | Setup | Maintenance | Renewal Behavior |
| --- | --- | --- | --- |
| CPQ (Classic) | Create 2 products (one-time & subscription) | Update both products & rules | Renewal logic tied to subscription SKU |
| RCA | Create 1 product with 2 selling models | Maintain 1 source of truth | Renewal driven by selling model |
Result: a leaner catalog, cleaner pricing logic, and faster time-to-market.
If you're new to RCA:
- Start clean: model your offerings once.
- Define selling models up front.
- Let RCA handle subscription and one-time logic dynamically.
⚖️ The Big Picture
CPQ: the product defines the selling type. RCA: the selling model defines how the product is sold. That simple shift drives:
✅ SKU consolidation ✅ Faster pricing updates ✅ Seamless renewals ✅ Smarter analytics
#Salesforce #RevenueCloud #CPQ #SalesforceRCA #DigitalTransformation #QuoteToCash #SalesOps #BusinessTransformation
Managing Dynamic Catalogs
Explore top LinkedIn content from expert professionals.
Summary
Managing dynamic catalogs refers to organizing and updating product or data catalogs that change frequently, whether in e-commerce, software, or data platforms. These systems allow businesses and teams to keep listings accurate, flexible, and ready for updates without manual rework or risking data mismatches.
- Consolidate catalog entries: Set up your catalog so you can update product information or pricing models in one place, which helps cut down on duplicate records and confusion.
- Define storage locations: Always specify a storage path when creating new catalogs or schemas to maintain clear boundaries and control over your data.
- Automate updates: Use automated rules and real-time sync to ensure your catalog reflects current inventory, product variants, or price changes for smoother operations and accurate listings.
-
When I first started working with Databricks Unity Catalog, I wish someone had told me this simple but crucial detail about catalog provisioning and storage locations. If you're about to set up a new catalog and the objects underneath it, here's what you need to know:
⁉️ What really happens when you create a new catalog?
💡 Sure, a new catalog gets registered in Unity Catalog. But if you don't specify a storage root (location) during creation, Databricks makes it a managed catalog, and all your data automatically lands in the default metastore storage location.
⁉️ Why does this matter?
💡 Let's say you move on to create a schema, again without specifying a storage path. That schema will also default to the metastore location. And when you create tables under that schema, unless you explicitly set a location, those tables will be managed tables stored in the central metastore location.
🤷‍♂️ The hidden impact: if you're building a data mesh or want clear data ownership boundaries, this can be a big deal. All your data across different catalogs, schemas, and tables ends up in a single, central storage account that you might not fully control. This complicates data governance, access control, and cost allocation down the road. It can also result in too many API calls against the same storage account, which can lead to throttling, since Azure enforces scalability targets (limits) on requests per second for storage accounts.
✅ My tips on best practices for catalogs and schemas:
👉 Always specify the storage location when creating catalogs and schemas if you want true data isolation and ownership.
👉 Review your Unity Catalog setup to ensure your data lands where you expect it to!
☘️ Regardless of the type of tables (managed or external) you're provisioning, make sure they land in the appropriate storage account; otherwise, migrating them later will be a painful task.
✅ My tips on best practices to avoid throttling (think ahead):
👉 Use multiple storage accounts for different catalogs, domains, or high-traffic workloads.
👉 For blob storage, organise your catalogs, schemas, and tables in a well-defined hierarchy.
⚠️ Trust me! Beyond the points above, you'll be in a difficult spot if your team ever plans to migrate your UC external tables to managed ones (I'll talk about that in a future post 😉).
#Databricks #UnityCatalog #DataGovernance #DataEngineering #BestPractices
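A minimal Databricks SQL sketch of the tip above: specify a managed storage location at catalog and schema creation time so managed objects don't fall back to the metastore root. The catalog, schema, table, and abfss path names are placeholders, and an external location or storage credential is assumed to already cover the path.

```sql
-- Create a catalog whose managed data lands in its own storage path
-- (placeholder names; an external location must already grant access to this path)
CREATE CATALOG IF NOT EXISTS sales_catalog
  MANAGED LOCATION 'abfss://sales@mystorageaccount.dfs.core.windows.net/catalogs/sales';

-- Create a schema with its own managed location inside that catalog
CREATE SCHEMA IF NOT EXISTS sales_catalog.orders
  MANAGED LOCATION 'abfss://sales@mystorageaccount.dfs.core.windows.net/catalogs/sales/orders';

-- Managed tables created here now inherit the schema/catalog location
-- instead of the default metastore storage
CREATE TABLE IF NOT EXISTS sales_catalog.orders.order_lines (
  order_id BIGINT,
  sku STRING,
  quantity INT
);
```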
-
I grew up in a family that ran a mid-sized fashion retail business: stores, loyal customers, seasonal rushes, and the occasional stockroom chaos. While I took the corporate route, I stayed close to the backend of it all, especially when conversations shifted to e-commerce.
The move online wasn't smooth. We listed on marketplaces and ran ads on Google and Facebook, but quickly realised how broken digital selling can be if your product catalog isn't in shape. We faced it all: Google rejecting listings for bad titles, shopping ads running on sold-out products, campaigns showing variants that didn't exist, and hours lost figuring out why products weren't live. Turns out, it's not always the ads that fail. It's the data behind them.
These are the kind of headaches most e-commerce brands silently suffer through until something like Strique steps in. Strique is an AI-powered marketing platform built for e-commerce brands, and its catalog module, Strique Feed Engine, makes product listings cleaner, sharper, and campaign-ready across Google and Meta. Here are a few key features that stood out to me:
✅️ Real-time inventory sync with Shopify, so sold-out products stop showing up in ads
✅️ Smart product sets you can build using filters like price, tags, or stock levels, ideal for promos or retargeting
✅️ Bulk editing of product titles, descriptions, and prices, with a rollback option if needed
✅️ Listing optimization that rewrites product titles and descriptions for better visibility and clicks
✅️ Automated rules that keep your catalog clean, like removing low-stock items without manual checks
It solves a lot of the behind-the-scenes problems that quietly drain ad budgets and impact performance. It's no surprise brands like Inc.5, Chemistry, and Crimzon have seen double-digit lifts in ROAS, CTR, and AOV. If you manage growth, catalogs, or have ever helped someone sell online, Strique is worth a look.
-
Meta has been rolling out updates at an incredible speed lately, especially in the creative area. Some new features I noticed today in different accounts:
・A new Advantage+ creative option called "Enhanced CTA" uses AI to pair key phrases from headlines with CTAs, in overlays for Stories.
・A "flexible aspect ratio" feature now allows Meta to display carousel images in different aspect ratios (for example 4:5; this can't be turned off!).
・AI-generated images created by default (though approval is still required).
The changes are coming fast and often!
▶ However, I believe the most significant update is to Commerce Manager's data sources. You can now use multiple data sources for the same product data, like prices or images, and prioritize which source updates each attribute. For example:
・Images can be updated from a partner platform like Shopify, and if no value is found there, the system will use the value from the data feed.
・Sale prices can be updated from a data feed, but you can prioritize manual input if that data is available.
Additionally, you can verify, for each item and attribute, how the updates are applied and which data source is being used. The system might seem more complex, but it offers much greater flexibility to improve product data. This feature works with partner platforms, feeds, APIs, manual edits, and automatically created catalogs from websites, giving you more control over catalog management.
It looks like a major update to the product data management system is being rolled out. The system for managing product variants also seems to have received enhancements, though I'm not entirely certain yet. There may be other improvements that I haven't come across so far. More details about the data source configuration function can be found here: https://lnkd.in/ggtajB59
-
🎄𝐅𝐚𝐯𝐨𝐫𝐢𝐭𝐞 𝐃𝐚𝐭𝐚𝐛𝐫𝐢𝐜𝐤𝐬 𝐑𝐞𝐥𝐞𝐚𝐬𝐞𝐬 𝐟𝐨𝐫 𝐃𝐞𝐜𝐞𝐦𝐛𝐞𝐫🎄
Better late than never, here are my favorite releases from December! Time flies, and I am already excited about digging deeper into the releases from January.
🔑 𝐔𝐧𝐢𝐭𝐲 𝐂𝐚𝐭𝐚𝐥𝐨𝐠 𝐌𝐀𝐍𝐀𝐆𝐄 𝐏𝐫𝐢𝐯𝐢𝐥𝐞𝐠𝐞 (𝐏𝐮𝐛𝐥𝐢𝐜 𝐏𝐫𝐞𝐯𝐢𝐞𝐰): Admins can now change ownership and manage permissions without being the current owner. This is a huge time-saver, especially in organizations where ownership changes frequently. No more waiting on the original owner to make adjustments; perfect for teams with dynamic roles, and a real time-saver for my team and many others (a minimal grant is sketched after this post).
🌐 𝐔𝐧𝐢𝐭𝐲 𝐂𝐚𝐭𝐚𝐥𝐨𝐠 𝐅𝐞𝐝𝐞𝐫𝐚𝐭𝐞𝐬 𝐭𝐨 𝐇𝐢𝐯𝐞 𝐌𝐞𝐭𝐚𝐬𝐭𝐨𝐫𝐞𝐬 𝐚𝐧𝐝 𝐀𝐖𝐒 𝐆𝐥𝐮𝐞 (𝐆𝐀): Federation allows Unity Catalog to connect to external systems' metadata, and this release expands it to include Hive Metastore and AWS Glue, enabling unified governance across multiple platforms. This is an improvement for companies working with AWS Glue or with legacy use of the Hive metastore. The metadata from these systems becomes visible directly in Unity Catalog, and you can enforce governance and access controls on that metadata without moving or duplicating data.
🔒 𝐂𝐫𝐞𝐝𝐞𝐧𝐭𝐢𝐚𝐥 𝐕𝐞𝐧𝐝𝐢𝐧𝐠 (𝐏𝐮𝐛𝐥𝐢𝐜 𝐏𝐫𝐞𝐯𝐢𝐞𝐰): Credential vending in Unity Catalog allows external systems to securely access data by generating temporary credentials for reading data from Unity Catalog external locations. These credentials are created on demand, granting time-scoped access to specific storage locations. Credential vending is supported for external systems that can connect via the Unity REST API and the Iceberg REST catalog, and it is primarily designed for read-only access to Unity Catalog data. This ensures that external systems can interact with the data without compromising governance and metadata management.
📁 𝐂𝐚𝐭𝐚𝐥𝐨𝐠-𝐋𝐞𝐯𝐞𝐥 𝐒𝐭𝐨𝐫𝐚𝐠𝐞 𝐈𝐬𝐨𝐥𝐚𝐭𝐢𝐨𝐧 (𝐆𝐀): You can now enforce storage isolation at the catalog level. If you have already set a default storage path on the metastore, you can now change it for the whole metastore or override it for specific catalogs, giving you more control over data storage management. This improvement enables better security, compliance, and flexibility across different datasets and departments.
🌉 𝐂𝐫𝐨𝐬𝐬-𝐏𝐥𝐚𝐭𝐟𝐨𝐫𝐦 𝐕𝐢𝐞𝐰 𝐒𝐡𝐚𝐫𝐢𝐧𝐠 (𝐏𝐮𝐛𝐥𝐢𝐜 𝐏𝐫𝐞𝐯𝐢𝐞𝐰): Delta Sharing now supports sharing views in addition to Delta tables across platforms. This enables teams to collaborate securely on specific datasets or insights while keeping raw data private. It is useful if you want to expose aggregated or filtered data without sharing the raw data.
📚 Read more: https://lnkd.in/dgYDzEeH
What were your favorite releases last month?
#DatabricksMVP #Databricks #UnityCatalog #SqlEditor #DataWarehousing
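A hedged Databricks SQL sketch of what two of these releases look like in practice; the catalog, schema, table, view, share, and group names are placeholders, and exact syntax may still change while the features are in preview.

```sql
-- MANAGE privilege: let a platform-admin group manage grants and ownership
-- on a table without being its current owner
GRANT MANAGE ON TABLE main.finance.transactions TO `platform-admins`;

-- Cross-platform view sharing: add a view (not just a Delta table) to a
-- Delta Sharing share so only the filtered/aggregated result is exposed
ALTER SHARE finance_share ADD VIEW main.finance.monthly_summary;
```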
-
I've seen many organizations struggle with structuring catalogs within Unity Catalog! 👇 👇 👇
I would like to present my recommended approach for designing data domain catalogs (a minimal SQL sketch follows after this post). To establish a strong foundation, I recommend starting with three core catalogs:
🌀 Sources: This catalog should contain domain-specific, source-aligned data.
🌀 Derived: A flexible catalog for transformed, general-purpose data, supporting a wide range of applications.
🌀 Customer-aligned: Here, you focus on consumer-aligned data, optimized for specific use cases.
In addition, I recommend creating two supplementary catalogs:
🌀 Published: This catalog is vital for publishing data products and enforcing contracts on datasets, ensuring compliance, access control, and efficient data distribution.
🌀 Sandbox: A dynamic space that enables ad-hoc analytics and exploration, providing a flexible environment for real-time data analysis and experimentation.
🎁 Each catalog can accommodate up to 10,000 schemas, which allows you to structure your data environments based on scale. If you anticipate exceeding this limit, you can create catalogs per environment or distribute data across multiple catalogs of the same type.
🎁 In a given region, you have a limit of 1,000 catalogs overall (though this is not a hard limit). Therefore, it's essential to maintain an optimal schemas-per-catalog ratio in your design to maximize efficiency.
🎁 It's also important to note that Sources, Derived, and Customer-aligned catalogs are not synonymous with Bronze, Silver, and Gold layers. For instance, Gold data might reside in a Sources catalog if you need to share that data directly with external users.
If you've implemented a different structure for Unity Catalog in your organization, I'd love to hear about your approach.
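A minimal Databricks SQL sketch of this layout for a single domain, assuming each catalog gets its own managed location; all catalog names, storage paths, and the consumer group are placeholders, not a prescribed naming convention.

```sql
-- Core catalogs for one data domain, each with its own managed location
CREATE CATALOG IF NOT EXISTS sales_sources
  MANAGED LOCATION 'abfss://sales@domainstorage.dfs.core.windows.net/sources';
CREATE CATALOG IF NOT EXISTS sales_derived
  MANAGED LOCATION 'abfss://sales@domainstorage.dfs.core.windows.net/derived';
CREATE CATALOG IF NOT EXISTS sales_customer_aligned
  MANAGED LOCATION 'abfss://sales@domainstorage.dfs.core.windows.net/customer_aligned';

-- Supplementary catalogs for data products and exploration
CREATE CATALOG IF NOT EXISTS sales_published
  MANAGED LOCATION 'abfss://sales@domainstorage.dfs.core.windows.net/published';
CREATE CATALOG IF NOT EXISTS sales_sandbox
  MANAGED LOCATION 'abfss://sales@domainstorage.dfs.core.windows.net/sandbox';

-- Consumers get broad read access only on the published catalog
GRANT USE CATALOG, USE SCHEMA, SELECT ON CATALOG sales_published TO `data-consumers`;
```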
-
EventCatalog federation lets your teams manage their own documentation in distributed teams, and federation combines it all into a single view. Teams like to keep docs/specs close to their code without having to leave and write content somewhere else (e.g. a wiki). This solution lets each team have its own EventCatalog instance, where they write their docs and keep them in source control alongside the code. Your teams can even automate their own docs using the SDK or our provided plugins. Your organization can then set up a central EventCatalog that pulls many catalogs together to give you a higher-level picture of your architecture. This lets developers, architects, and business stakeholders quickly find content in your distributed system while keeping your teams focused on local documentation. I have created a new example where you can see how it works and get started (if this is something that sounds interesting to you). https://lnkd.in/eu-59sVR Enjoy!
-
Most brands think catalogs are dinosaurs, but they're actually revenue rockets when done right. We recently worked with DTC darling Cozy Earth, an apparel powerhouse, to overhaul and modernize their direct mail program. (And we cut their catalog deployment from 6 months to 5 weeks and drove $3M+ in revenue with a 7.71x ROAS.) Here's the exact playbook that you can steal today...
🚀 CATALOG SPEED HACK: Traditional agencies take 6 months to deploy catalogs. We cut it to 5 weeks. How? By eliminating the "agency tax": those pointless delays where your catalog sits in approval queues. Our AI built their prospect list in hours (not weeks). Our in-house printing saved 2+ months of vendor ping-pong. Result: 200,000 catalogs hit mailboxes before BFCM, driving 7.71x incremental ROAS.
📊 RETARGETING EVOLUTION: Email subscribers ghosting you? Catalogs fix that. Cozy Earth now reaches 3x more non-buyers with personalized postcards. They segment by 2-week windows (up to 90 days from subscription). Fresh creative every month keeps offers relevant. Result: up to 6.53x incremental ROAS on people who ignored their emails.
♻️ RETENTION BREAKTHROUGH: "Our churn rate was lower when we looked at year-over-year performance. That's never happened." – Cozy Earth's Retention Manager. They now automatically trigger postcards based on purchase recency. They reach BOTH subscribed AND unsubscribed customers (try doing that with email). Up to 7 touchpoints over a year keep their brand top-of-mind. Result: a mind-blowing 15.58x incremental ROAS on retention campaigns (yes, incremental, tested via dynamic control groups).
Is your brand still treating direct mail like it's 1999? Or worse, ignoring it completely? Drop a "📬" in the comments if you want to see the exact template we used to cut deployment time by 75%.
#CatalogMarketing #DTCStrategy #DirectMailRevolution
-
𝗡𝗲𝘅𝘁-𝗟𝗲𝘃𝗲𝗹 𝗗𝗮𝘁𝗮 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲: 𝗗𝘆𝗻𝗮𝗺𝗶𝗰 𝗔𝗰𝗰𝗲𝘀𝘀 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝘄𝗶𝘁𝗵 𝗔𝗕𝗔𝗖 𝗶𝗻 𝗗𝗮𝘁𝗮𝗯𝗿𝗶𝗰𝗸𝘀
Hej! 👋 We're all familiar with Role-Based Access Control (RBAC) in Unity Catalog: GRANT SELECT ON a_table TO data_analysts. This works well, but what happens when the rules get more complex? What if only certain users can access columns with PII, or if access depends on the user's department? This is where RBAC reaches its limits and we need a more dynamic approach: Attribute-Based Access Control (ABAC). Instead of hundreds of static rules, ABAC allows us to define universal policies based on metadata, or attributes.
How It Works in Practice (Simplified; a minimal tagging sketch follows after this post)
1. Tag data with attributes: First, we classify our data by assigning a tag to a column, table, or schema. Example: we tag the email column with pii_data = 'true'.
2. Assign attributes to users/groups: We define attributes for our users or groups. Example: the group finance_de receives the attribute department = 'finance'.
3. Define rules connecting attributes: Now for the magic. We create a rule that uses these attributes. Example rule: "ALLOW access to all data tagged with pii_data = 'true' ONLY for groups that have the attribute clearance = 'level_3'."
Why is this important?
- Scalability: When a new employee joins the team, we just assign them to the right group with the right attributes. We no longer have to execute 20 different GRANT commands; access rights are determined automatically by their attributes.
- Dynamism: If the status of data changes (e.g., from "confidential" to "public"), we only need to change the tag on the table. All access rules adapt immediately and automatically.
- Fine-grained control: ABAC enables extremely detailed control that goes far beyond table or schema boundaries. It is the key to securely managing sensitive data in large organizations.
For me, ABAC is the logical evolution of data governance in the Lakehouse. We're moving from a rigid, object-based permission model to a flexible, policy-based system that grows with the business. Are you already using tags in Unity Catalog to classify your data? Or do you already have initial use cases for ABAC in mind? Share your thoughts! 👇
#Databricks #DataGovernance #UnityCatalog #ABAC #BestPractices #DataInsightConsulting
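A minimal Databricks SQL sketch of step 1 above (classifying data with tags); the catalog, schema, table, column, and tag names are placeholders, and the policy layer that consumes these tags (step 3) is configured separately, with syntax that may vary while ABAC support is in preview.

```sql
-- Tag a PII column so policies can target it by attribute rather than by name
ALTER TABLE main.crm.customers
  ALTER COLUMN email SET TAGS ('pii_data' = 'true');

-- Tags can also be applied at table or schema level for coarser classification
ALTER TABLE main.crm.customers SET TAGS ('data_domain' = 'crm');
ALTER SCHEMA main.crm SET TAGS ('data_owner' = 'crm_team');

-- Inspect which tags are applied, via information_schema
SELECT column_name, tag_name, tag_value
FROM main.information_schema.column_tags
WHERE table_name = 'customers';
```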
-
For a Delta Lake setup on Databricks, managing multiple namespaces (schemas or databases) efficiently involves using Unity Catalog or the legacy Hive metastore, depending on your setup. Here's how you can maintain three namespaces:
1. Using Unity Catalog (recommended for multi-workspace governance)
Steps:
1. Enable Unity Catalog: ensure your Databricks workspace is enabled for Unity Catalog and set up a metastore that spans multiple workspaces.
2. Create three separate schemas (namespaces):
   CREATE SCHEMA IF NOT EXISTS catalog_name.namespace1;
   CREATE SCHEMA IF NOT EXISTS catalog_name.namespace2;
   CREATE SCHEMA IF NOT EXISTS catalog_name.namespace3;
3. Assign permissions (role-based access control):
   GRANT USE SCHEMA ON SCHEMA catalog_name.namespace1 TO user_or_group;
   GRANT USE SCHEMA ON SCHEMA catalog_name.namespace2 TO user_or_group;
   GRANT USE SCHEMA ON SCHEMA catalog_name.namespace3 TO user_or_group;
4. Switch between namespaces:
   USE catalog_name.namespace1;
2. Using the legacy Hive metastore
If you're not using Unity Catalog, you can still maintain multiple namespaces in Databricks by creating and managing Hive databases.
Steps:
1. Create separate databases:
   CREATE DATABASE IF NOT EXISTS namespace1;
   CREATE DATABASE IF NOT EXISTS namespace2;
   CREATE DATABASE IF NOT EXISTS namespace3;
2. Use a specific database:
   USE namespace1;
3. Grant permissions for access control:
   GRANT SELECT ON DATABASE namespace1 TO user_or_group;
3. Organizing data using mount points (if using external storage)
If you're working with Azure Data Lake Storage (ADLS) or AWS S3, you can mount different storage locations for each namespace. Example for ADLS Gen2 (Python):
   dbutils.fs.mount(
     source="wasbs://namespace1@storage_account.blob.core.windows.net/",
     mount_point="/mnt/namespace1",
     extra_configs={"fs.azure.account.key.storage_account.blob.core.windows.net": "your_access_key"}
   )
Then, use different mount points for different namespaces.
Best practices:
- Use Unity Catalog if managing namespaces across multiple workspaces.
- Ensure proper role-based access control (RBAC) to restrict access.
- Use different databases/schemas for logical separation.
- For external storage, use mount points or external tables.