Neural networks can make predictions that violate basic physics or the laws of thermodynamics if they are trained only to minimize a loss function. To fix this, ML scientists introduced PINNs (Physics-Informed Neural Networks), where you penalize a neural network when it makes physically nonsensical predictions.

But what if you don't know the full physics of a system? How do you penalize the neural network then? Universal Differential Equations (UDEs) are the answer. I am writing this article in praise of this marvelous technique, which is truly changing the way we bring science and ML together. A whole field is emerging as a result: Scientific Machine Learning (SciML).

Let us look at a spring-mass-damper system, a classic example in physics and engineering. Usually, it goes like this:

mx'' + bx' + kx = 0

In a perfect world, the parameters m, b, k would be constants we measure in a lab. But in real life, your damper might behave non-linearly, so you may not know what the damping force is. That is where Universal Differential Equations come into the picture. Instead of blindly trusting a neural network or strictly forcing your physical laws down the model's throat, you merge them.

In short, a UDE says: "I know some of the physics. Let me put that in. The rest that I don't know? That's the chunk I will replace with a neural network."

So how do we do it with the spring-mass-damper? A hybrid model: part physics, part neural network. We know there is a second-order ODE term to account for acceleration and a kx term for the spring force. However, suppose we suspect the damping force is not the usual linear form. Maybe it is more complicated, or partially unknown:

mx'' + kx + [unknown] = 0

Now the "something unknown" becomes a learned function modeled by a neural network NN(θ):

[unknown] = NN(θ)

If you suspect a hidden/unknown effect, you can funnel that knowledge gap straight into the neural network term. Note that here the neural network is predicting the damping term, while what we ultimately want to predict is the displacement x(t).

So what does the UDE predict? The neural network alone is not the UDE. The UDE has to predict x(t) so that you can compare the predicted x(t) with the experimental x(t) and define the loss. How exactly does the UDE predict x(t)?

1) Feed the initial condition and experimental data into the model
2) The neural network NN(θ) predicts the unknown damping term
3) Combine it with the known ODE: mx'' + kx + NN(θ) = 0
4) Numerically integrate to predict x and x'
5) Compare the predictions to the experimental data
6) Back-propagate and optimize until the loss is minimized

You have the final UDE model.

I have made a lecture video on UDEs (for absolute beginners) on Vizuara's YouTube channel. Do check it out. I hope you enjoy watching this lecture as much as I enjoyed making it: https://lnkd.in/gPWQuXHR
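To make the six steps concrete, here is a minimal Python/NumPy sketch of the forward pass and the loss. This is not the lecture's code: the mass, stiffness, step size, network width, and the synthetic "experimental" data are all illustrative assumptions, and the actual fitting of θ (step 6) would be done with an automatic-differentiation framework, which is left out here.

```python
import numpy as np

# Illustrative constants (assumptions, not measured values)
m, k = 1.0, 2.0           # known mass and spring stiffness
dt, n_steps = 0.01, 500   # Euler step size and horizon

def nn_damping(v, theta):
    """Tiny one-hidden-layer network standing in for the unknown damping force NN(theta)."""
    w1, b1, w2, b2 = theta
    h = np.tanh(w1 * v + b1)          # hidden layer
    return float(np.dot(w2, h) + b2)  # scalar damping force

def simulate(theta, x0=1.0, v0=0.0):
    """Integrate the hybrid ODE  m*x'' + k*x + NN(theta) = 0  with explicit Euler."""
    x, v, xs = x0, v0, []
    for _ in range(n_steps):
        a = -(k * x + nn_damping(v, theta)) / m   # acceleration from the UDE
        x, v = x + dt * v, v + dt * a
        xs.append(x)
    return np.array(xs)

def loss(theta, x_measured):
    """Mean-squared error between predicted and 'experimental' displacement."""
    return np.mean((simulate(theta) - x_measured) ** 2)

# Synthetic stand-in for experimental data (a lightly damped oscillation)
t = dt * np.arange(1, n_steps + 1)
x_exp = np.exp(-0.05 * t) * np.cos(np.sqrt(k / m) * t)

# Random initial parameters; training (back-propagation + optimizer) is omitted from this sketch.
rng = np.random.default_rng(0)
theta0 = (0.1 * rng.normal(size=8), np.zeros(8), 0.1 * rng.normal(size=8), 0.0)
print("loss before training:", loss(theta0, x_exp))
```

The key point the sketch shows: the loss is defined on x(t), not on the damping force itself, so the neural network is trained only through the integrated hybrid ODE.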
Engineering Project Cost Estimation
-
Yesterday, we pitched at an angel syndicate's startup showcase with 14 other companies. Every single company used a top-down approach where SOM is a % of SAM is a % of TAM.

I couldn't help but cringe. I was one of them a year ago. But not anymore! 😎

The problem with this approach is that you'll always end up with a market worth [insert $X million or billions]. It's devoid of any deep analysis. It doesn't question your ability or consider theoretical limits, like lead gen rate or conversion rates, that you'll encounter while trying to hit those numbers. For instance, you're probably not generating 1,000 leads per day for a B2B business, even at scale.

So, it's about time we started questioning the age-old TAM-SAM-SOM approach.

A better approach is bottom-up. You ask: "How big can I grow if Y number of customers pay us $Z?" It's a more pragmatic approach, focusing on your ability instead of some arbitrary %. When you think it through, you'll realize there are limiters like churn rate. This exercise grounds you in reality.

I'm sharing our market size calculation in the comments (+ bonus video to learn more). It's been vetted by two institutional investors and includes neat tools like scenario analysis and funding round sizes. Not saying it's the best out there, but it's definitely a step up from what our grandfathers did.

Copy and steal the template. No credit needed. Happy learning! 🥂

---

Edit: Since the link got lost in the comments, here you go: https://lnkd.in/g2pK6BK6
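As a rough illustration of the bottom-up logic (this is not the author's template, and every number is made up): start from a realistic lead flow, apply a conversion rate, let churn cap the customer base, and see what revenue that actually supports.

```python
# Bottom-up market sizing sketch (hypothetical inputs, not the template's figures)
def bottom_up_arr(leads_per_day, conversion_rate, price_per_year, monthly_churn, months=36):
    """Grow a customer base from lead flow, with churn as the natural limiter,
    and return the annual revenue run-rate at the end of the horizon."""
    customers = 0.0
    for _ in range(months):
        new_customers = leads_per_day * 30 * conversion_rate
        customers = customers * (1 - monthly_churn) + new_customers
    return customers * price_per_year

# Example: 20 B2B leads/day, 2% close rate, $6k/yr per customer, 1.5% monthly churn
print(f"Obtainable ARR after 3 years: ${bottom_up_arr(20, 0.02, 6000, 0.015):,.0f}")
```

Churn puts a ceiling on the customer base no matter how long you run the model, which is exactly the kind of limiter a top-down % of TAM never surfaces.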
-
Most small businesses default to two forecasting methods: top-down or bottom-up. But they both share the same problem: the "why" behind performance isn't explained.

These approaches are easy to model and are used all the time. But they can easily fail as companies grow larger and more driver-based.

(1) Top-down forecasting

Many companies favor top-down because it's simple and aligned with strategic goals. But its biggest drawback is that it's often completely disconnected from operational reality. I use it for high-level financial forecasting and hardly ever for operational planning.

• Leadership sets growth or margin targets
• The P&L is segmented into business units
• These targets cascade down the statements
• Line items are forecast on high-level assumptions

(2) Bottom-up forecasting

Bottom-up forecasting is based on detailed inputs such as sales to customers, sales by SKU, hiring plans by individual versus job category or department, expense budgets, etc. The benefit of bottom-up is that it's detailed and grounded in operations. But it's usually time-consuming, fragmented, and hard to roll up consistently.

• Individual contributors come up with their numbers
• They share them with an accountant or financial analyst
• The accounting/finance person puts them into a model
• The model is updated constantly with new details

(3) Driver-based forecasting

Rather than come up with high-level assumptions that don't tie into operations, or granular detail that doesn't separate signal from noise, driver-based forecasting combines the best of both.

In this example for a professional staffing company, we can tie future revenue to placements per recruiter, contract duration, markup percentage, bill rates, and recruiter headcount. This gives FP&A the ability to flex operating assumptions, test them, and quickly see what can be done on the ground to influence results.

The differences between the 3 methods matter:

Top-down may set revenue at $50 million based on an 8% growth rate. We can ask, "How do we increase growth?"

Bottom-up may set revenue at $50 million based on a monthly forecast of 200 customers. We can ask, "What do we expect from each customer?"

Driver-based planning may arrive at the same $50 million but ask, "What operational levers can we press to truly move revenue and margin?"

The result is forecasts that are faster, more explainable, and easier to update.

💡 If you want to explore next-level modeling techniques, join live with 200+ people for Advanced FP&A: Financial Modeling with Dynamic Excel Session 2. https://lnkd.in/emi2xFdZ
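To show what "driver-based" looks like in practice, here is a minimal sketch for the staffing example. The drivers are the ones named above, but every value is an illustrative assumption, not the company's actual model.

```python
# Driver-based revenue sketch for a professional staffing firm (hypothetical numbers)
def staffing_revenue(recruiters, placements_per_recruiter_month, contract_weeks,
                     hours_per_week, bill_rate, months=12):
    """Revenue = billed hours x bill rate, with billed hours driven by recruiter output."""
    placements = recruiters * placements_per_recruiter_month * months
    billed_hours = placements * contract_weeks * hours_per_week
    return billed_hours * bill_rate

base = staffing_revenue(recruiters=25, placements_per_recruiter_month=2.0,
                        contract_weeks=20, hours_per_week=40, bill_rate=85)
# Flex a single operational lever: +15% placements per recruiter
flexed = staffing_revenue(recruiters=25, placements_per_recruiter_month=2.3,
                          contract_weeks=20, hours_per_week=40, bill_rate=85)
print(f"Base plan: ${base:,.0f}   With +15% recruiter productivity: ${flexed:,.0f}")
```

Because revenue is a function of named operational levers, the question shifts from "did we hit the number?" to "which driver do we push?"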
-
🎗️ #Hybrid_HAZOP and Quantitative Deviation Analysis (A Dynamic Approach)

1️⃣ Technology: #AspenTechnology and #Hybrid_Modeling
2️⃣ Use Cases: PHA
3️⃣ Value: Preclude the risk of unnecessary cost resulting from purely qualitative 📖 methods

The integration of Aspen Technology and Hybrid Modeling brings a transformative approach to #PHA (Process Hazard Analysis), combining the strengths of process dynamics with AI. This enables teams to quantitatively assess risks, precluding unnecessary costs that often arise from purely qualitative methods.

💡 Background
The continuous evolution of #AI technology is driving improvements in operational excellence while prioritizing #ProcessSafety. This is what we do as R&D and consultants at #AspenTech. The Dynamic Hybrid Modeling approach exemplifies these advancements by uniting the capabilities of process dynamics and industrial AI into a comprehensive framework.

💰 Success Story: ORYX GTL
In a naphtha splitter system, the HAZOP process was enhanced with Dynamic Hybrid Modeling, addressing knock-on effects on system KPIs quantitatively. The primitive method involving HYSYS models, #HMB (Heat & Mass Balance), PFDs, and P&IDs was upgraded to enable the delivery of a Hybrid HAZOP (dynamic concept).

Implementation Steps
1️⃣ Ensure a validated #HYSYS model for systems undergoing HAZOP/PHA.
2️⃣ Employ #Aspen_MultiCase to generate big data within system constraints and safe operating limits.
3️⃣ Utilize #AIMB to build neural networks from JSON files derived from #Aspen_MultiCase.
4️⃣ Run the #AIMB model in #HYSYS.
5️⃣ Switch to dynamics mode to examine all #HAZOP scenarios using the Event Scheduler option in HYSYS Dynamics.

🔑 Value Delivered
This systematic approach ensures informed decision-making during safety reviews.

💰 Call to Action
Avoid out-of-sequence engineering by following the above steps to streamline your safety sessions and achieve optimal results by adopting Hybrid HAZOP.

#HybridHAZOP #PHA #AIMB #AspenMultiCase #HYSYS #AI #ANN #ROM #Dynamics #Cost
-
Most revenue models are built backwards.

Finance picks a number. Sales divides by average quota. You end up with something like: "We need $40M, our quota is $1M per rep, so let's hire 40 reps."

It looks tidy in a spreadsheet...and it almost never works in the real world. :)

Why? Well, because this model assumes every rep:
- Ramps on time
- Hits 100%
- Stays the full year

Which is like assuming every Uber driver wins the Indy 500.

Here's a better way to build a revenue model: First off, stop treating quota as a fixed assumption, and start building around ramped capacity, rep variability, and reality.

1. Plan using RRE, not headcount.
RRE = Ramp-Weighted Ramped Equivalents
Forget how many reps you have. Focus on how many fully productive equivalents you'll actually have in a given quarter. This accounts for:
- Ramp time.
- Attrition.
- Variance in performance bands.
That new rep you just hired? They're not a "1" in your model. They're a 0.2, then 0.4, then maybe 0.7 if you're lucky. Ten reps with half still ramping = 6.5 RREs. Not 10.

2. Build top down and bottom up...then reconcile.
Top down: What makes the VCs happy?
Bottom up: What's actually possible given productivity curves?
When these numbers don't match (spoiler: they won't), you've found your strategic tension point.

3. Layer in performance bands.
Not all reps hit quota. And that's not failure. That's just math. Try modeling based on a realistic performance distribution:
- Top 20% hit 120-150%
- Middle 60% hit 70-90%
- Bottom 20% hit 0-50%
If your plan assumes everyone hits 100%, you're either new here… ...or about to be. 😬

4. Bake in operational drag.
Every revenue model looks clean...until enablement stalls, marketing underdelivers, or a region goes sideways. So you should build in a drag factor:
- Deal slippage.
- Hiring delays.
- Funnel softness.
- Internal execution risk.
Don't present worst case scenarios, but do plan for them.

Some revenue leaders treat quota like a scoreboard, whereas you should treat it like an operating system.

Don't ask: "How many reps do we need to hit $40M?"
Instead, ask: "How do we engineer the system to consistently produce $40M - with margin for error?"

That's the difference between running a sales org and running a revenue machine.
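A minimal sketch of the RRE-plus-bands-plus-drag idea. The ramp curve, band attainments, and drag factor below are illustrative assumptions, not a prescribed model.

```python
# Ramp-weighted capacity sketch (hypothetical ramp curve, performance bands, and drag)
RAMP = {1: 0.2, 2: 0.4, 3: 0.7}                    # productivity by quarter of tenure; 4+ = 1.0
BANDS = [(0.2, 1.35), (0.6, 0.80), (0.2, 0.25)]    # (share of reps, midpoint attainment)

def rre(reps_by_tenure):
    """reps_by_tenure: {quarters_of_tenure: rep_count} -> ramped-rep equivalents."""
    return sum(count * RAMP.get(q, 1.0) for q, count in reps_by_tenure.items())

def expected_bookings(reps_by_tenure, quota=1_000_000, drag=0.10):
    """RRE x quota x blended attainment from performance bands, minus an operational drag factor."""
    blended_attainment = sum(share * att for share, att in BANDS)   # ~0.80, not 1.0
    return rre(reps_by_tenure) * quota * blended_attainment * (1 - drag)

team = {1: 3, 2: 2, 4: 5}   # 10 reps on paper, half still ramping
print(f"RRE: {rre(team):.1f} (not 10)   Realistic capacity: ${expected_bookings(team):,.0f}")
```

The gap between headcount x quota and this number is the strategic tension point worth reconciling before the plan is signed off.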
-
If you benchmark projects on €/kWp, you miss the point. The real metric is €/MWh.

In practice, I keep running into the same discussions: How do you compare Project A (say, in Eastern Europe) with Project B (say, in Southern Europe), when grid, construction, O&M or financing have totally different cost profiles?

Instead of arguing over individual cost items, there's a simpler way: look at LCOE (€/MWh).

What really matters (short & clear):
--> €/kWp = construction indicator, but not a success factor.
--> LCOE (€/MWh) captures CAPEX, OPEX, performance (PR/degradation), financing & lifetime.
--> A "more expensive" project can deliver cheaper power thanks to higher yield, longer lifetime, or better financing.
--> Investors and banks already benchmark on €/MWh, not €/kWp.

Number flavor (utility scale, all-in incl. EPC, development, financing):
--> Typical utility scale DE/CEE (2024): ~560–600 €/kWp all-in
--> Project A: 580 €/kWp, PR 80%, WACC 6%, 25 years -> ~49-52 €/MWh
--> Project B: 640 €/kWp, PR 87%, WACC 5%, 30 years -> ~40-43 €/MWh
--> Same installed capacity, different assumptions -> output beats input.

Do you still benchmark projects on €/kWp? Or already on €/MWh? And which 3 variables move your LCOE the most: PR, WACC, O&M, degradation?

#AndreasBach #LCOE #SolarPV #ProjectFinance #CleanEnergy
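For readers who want to reproduce the direction of that comparison, here is a simplified LCOE sketch. The specific yield, O&M cost, and degradation are placeholder assumptions, so the absolute €/MWh values will not match the "number flavor" above, but the ranking of the two projects comes out the same.

```python
# Simplified LCOE sketch in EUR/MWh for 1 kWp of PV (placeholder inputs)
def lcoe(capex_per_kwp, opex_per_kwp_yr, specific_yield_kwh, pr, wacc, years, degradation=0.004):
    """Discounted lifetime cost divided by discounted lifetime energy."""
    cost = capex_per_kwp          # CAPEX hits at year 0
    energy_mwh = 0.0
    for t in range(1, years + 1):
        disc = (1 + wacc) ** t
        cost += opex_per_kwp_yr / disc
        energy_mwh += specific_yield_kwh * pr * (1 - degradation) ** (t - 1) / 1000 / disc
    return cost / energy_mwh

a = lcoe(580, 10, 1100, 0.80, 0.06, 25)   # the "cheaper" project per kWp
b = lcoe(640, 10, 1100, 0.87, 0.05, 30)   # the "more expensive" project per kWp
print(f"Project A: {a:.0f} EUR/MWh   Project B: {b:.0f} EUR/MWh")
```

Project B wins on €/MWh despite the higher €/kWp, which is the whole point: output beats input.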
-
CAPEX estimation for low-maturity technology projects is challenging, particularly when we talk about new equipment. Yet, we still need to be able to get fairly accurate figures to justify the viability of the technology and secure funding for its development. How to do it? Here is what we usually do for hydrogen and carbon capture projects.

1. Define the Project Scope
Start by clearly outlining all project boundaries, objectives and deliverables. Identify every cost element required for full-scale implementation, from engineering and design to construction and commissioning, while distinguishing between one-off investments and those that can be standardised.

2. Develop the First-of-a-Kind CAPEX Estimate
• Detailed Bottom-Up Analysis: Break down the project into its individual components, accounting for bespoke engineering, pilot testing, specialized installations, and comprehensive project management.
• Risk and Contingency: Due to the innovative nature and inherent uncertainties of FOAK projects, incorporate generous contingencies to cover design modifications, unforeseen challenges, and regulatory uncertainties.
• Documentation: Maintain thorough records of assumptions and decisions made during this phase, as these will inform future projects.

3. Scale to the Nth-of-a-Kind Estimate with Learning Curves
Leverage the insights from the FOAK phase to isolate repeatable cost elements. With each subsequent build, learning curves drive efficiencies:
• Standardize Processes: As you replicate the project, streamline designs and processes.
• Realize Efficiency Gains: Experience leads to better vendor relationships and operational refinements, translating into significant cost reductions for repeatable components.
• Adjust Estimates: Update your cost models to reflect these improvements, using your own or reported learning curves, ensuring more accurate and lower capital expenditure projections for future projects.

4. Implement Continuous Improvement
Regularly revisit and refine both FOAK and NOAK estimates. As more operational data becomes available, adjust your assumptions and conduct sensitivity analyses to maintain a robust, realistic CAPEX projection.

How do you estimate CAPEX for your technology?

#Innovation #research #hydrogen #carboncapture #science #scientist #chemicalengineering
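For step 3, a common way to move from a FOAK figure to NOAK projections is a learning-curve (Wright's law) adjustment. The sketch below is illustrative: the 90% learning rate and the FOAK cost are placeholders, and in practice you would substitute your own or reported curves.

```python
import math

# NOAK CAPEX from a FOAK estimate via a learning curve (placeholder inputs)
def noak_capex(foak_capex, unit_number, learning_rate=0.90):
    """Wright's law: cost falls by (1 - learning_rate) with every doubling of cumulative units."""
    b = math.log(learning_rate) / math.log(2)   # negative learning exponent
    return foak_capex * unit_number ** b

foak = 120e6   # EUR, first-of-a-kind plant including generous contingency
for n in (1, 2, 4, 8):
    print(f"Unit {n}: EUR {noak_capex(foak, n) / 1e6:.0f}M")
```

In line with the post, only the repeatable cost elements isolated in the FOAK phase would ride this curve; bespoke, one-off items would be estimated separately.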
-
When you’re asked for an estimate and it’s treated like a deadline.

A risk burndown can help your stakeholders understand the complexities involved in estimating the work. It enables teams to identify the unknowns and prioritise de-risking.

1) As a team, identify all the risks you can think of
2) Guesstimate how likely they are to happen (as a percentage)
3) Guesstimate how much effort would be involved if the risk happened
4) Multiply the likelihood by the effort to calculate the risk exposure
5) Add all the risk exposure totals together to get your overall risk exposure total
6) Plot this on a graph each week to track how your exposure changes over time.

It’s not an exact science but it gives you, your team and your stakeholders a better understanding of the situation and inherent risks.
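A tiny sketch of steps 2-5; the risks, percentages, and effort figures here are hypothetical.

```python
# Risk-exposure sketch for a weekly risk burndown (hypothetical risks and guesstimates)
risks = [
    {"name": "Third-party API not ready",  "likelihood": 0.6, "effort_days": 10},
    {"name": "Data migration edge cases",  "likelihood": 0.4, "effort_days": 15},
    {"name": "Key engineer unavailable",   "likelihood": 0.2, "effort_days": 20},
]

def total_exposure(risks):
    """Sum of likelihood x effort for each risk, in days."""
    return sum(r["likelihood"] * r["effort_days"] for r in risks)

# Recalculate weekly and plot the total to show exposure burning down as risks are retired.
print(f"This week's risk exposure: {total_exposure(risks):.1f} days")
```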
-
Sales Projections: Strategy or Speculation?

Let’s be honest — I’ve seen far too many sales projections that look more like wishful thinking than strategic planning.

A bold number on a slide — “We’ll hit $1M next quarter.” Everyone nods, the target is set, and the meeting moves on. But here’s the hard truth: A projection without a strategy is just a guess.

I’ve learned this the hard way. Early in my career, I witnessed a team miss their quarterly target by a huge margin — not because they didn’t work hard, but because their projections were built on gut feel and blind optimism.

No alignment between sales goals and actual pipeline health.
No consideration for changing customer behavior or market dynamics.
No breakdown of how deals would move through the funnel.

It wasn’t a forecast — it was a hope-cast.

So, how do seasoned sales leaders project with precision? It boils down to three strategic pillars:

1️⃣ Market-Driven Insights
Your projections must start outside your company, not inside. What’s happening in your industry? How are customer priorities shifting? Is there economic turbulence or competitive disruption? Sales doesn’t operate in a vacuum — your projections shouldn't either.

2️⃣ Pipeline Precision
A projection isn’t a random target — it’s a sum of its parts: How many deals are in each pipeline stage? What’s your historical win rate? What’s the average deal size and velocity? Bottom-up forecasting — where data, not hope, dictates the number — is the only way to build credibility.

3️⃣ Scenario-Based Planning
Smart leaders never project a single number — they project a range:
Best case: If high-value deals close faster than expected.
Worst case: If key prospects stall or drop out.
Most likely case: Where the current pipeline trends realistically point.
This isn't playing it safe — it's playing it smart.

What happens when you adopt this approach? Your sales team knows exactly what they’re working toward. Leadership has confidence in the numbers. You shift from chasing targets to executing a clear, strategic plan.

Because at the end of the day — sales projections aren’t about predicting the future, they’re about engineering it.

Would love to hear from my network — how do you balance optimism and realism in your sales projections? Let’s discuss.

#SalesLeadership #StrategicProjections #RevenueGrowth #SalesStrategy
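As a small illustration of pillars 2 and 3, here is a sketch of a pipeline-weighted forecast with a scenario range. The stages, win rates, deal sizes, and band widths are hypothetical, not a recommended standard.

```python
# Pipeline-weighted forecast sketch with a scenario range (hypothetical pipeline)
PIPELINE = [  # (stage, deal_count, avg_deal_size, historical_win_rate_from_stage)
    ("Discovery",   30, 25_000, 0.10),
    ("Proposal",    12, 30_000, 0.35),
    ("Negotiation",  5, 40_000, 0.65),
]

def weighted_forecast(pipeline):
    """Most-likely case: each stage contributes deals x size x historical win rate."""
    return sum(count * size * win for _, count, size, win in pipeline)

likely = weighted_forecast(PIPELINE)
best, worst = likely * 1.25, likely * 0.70   # illustrative bands: fast closes vs. stalled deals
print(f"Most likely: ${likely:,.0f}   Range: ${worst:,.0f} - ${best:,.0f}")
```

Presenting the range rather than a single bold number is what turns a hope-cast back into a forecast.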
-
Imagine you're planning a simple trip to the grocery store.

In the best-case scenario, you arrive, find a parking spot right in front, and there's no line at the checkout. In the most likely scenario, the store is a bit busy—you park a little further away, wait in a short line, but everything goes fairly smoothly. In the worst-case scenario, the store is packed, there are no parking spots, you wait in a long checkout line, and the item you need is out of stock.

This everyday scenario illustrates the concept of three-point estimates, a valuable tool for planning tasks with uncertainty, particularly in software testing.

In testing, whether you're estimating the effort needed for automation framework development or regression test execution, considering three different outcomes—Optimistic, Most Likely, and Pessimistic—can provide a more realistic estimate.

Let's break it down with a work-related example. Suppose you're preparing a test strategy. If everything goes perfectly, it might take 8 days (Optimistic). If typical challenges arise, it could take 10 days (Most Likely). But if significant delays occur, it might take 12 days (Pessimistic).

The formula for calculating the estimate from these three scenarios is:

E = (Pessimistic + 4 x Most Likely + Optimistic) / 6

Applying this to our example:

E = (12 + 4 x 10 + 8) / 6 = 60 / 6 = 10 days

This approach provides a balanced estimate, leaning towards the most likely scenario, while still considering the best and worst possibilities. While this method is more time-consuming and requires thorough documentation to avoid misunderstandings, it ultimately leads to more accurate and realistic project timelines.

Have you tried using this technique in your projects? Please share your experience in the comments below.

#SoftwareTesting #QualityAssurance #TestMetry #Estimation
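The formula drops straight into code; a small helper like this (the function name is just an example) keeps the weighting explicit:

```python
# Three-point (PERT-style) estimate
def three_point_estimate(optimistic, most_likely, pessimistic):
    """Weighted average: the most likely scenario counts four times."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(three_point_estimate(8, 10, 12))   # -> 10.0 days, matching the test-strategy example
```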