Software Engineering Principles

Explore top LinkedIn content from expert professionals.

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer
    216,997 followers

    💎 Accessibility For Designers Checklist (PDF: https://lnkd.in/e9Z2G2kF), a practical set of cards on WCAG accessibility guidelines, covering accessible color, typography, animations, media, layout and development — to kick off accessibility conversations early on. Kindly put together by Geri Reid.

    WCAG for Designers Checklist, by Geri Reid
    Article: https://lnkd.in/ef8-Yy9E
    PDF: https://lnkd.in/e9Z2G2kF
    WCAG 2.2 Guidelines: https://lnkd.in/eYmzrNh7

    Accessibility isn’t about compliance. It’s not about ticking off checkboxes. And it’s not about plugging in accessibility overlays or AI engines either. It’s about *designing* with a wide range of people in mind — from the very start, regardless of their skills and preferences.

    In my experience, the most impactful way to embed accessibility in your work is to bring a handful of people with different needs into the design process and usability testing early. Make these test sessions accessible to the entire team, and show the real impact of design and code on real people using a real product.

    Teams usually don’t get time to work on features that don’t have a clear business case. But no manager wants to be seen publicly ignoring prospective customers. Make accessibility visible to everyone on the team, and build an argument around potential reach and potential income.

    Don’t ask for big commitments: embed accessibility in your work by default. Account for accessibility needs in your estimates. Create accessibility tickets and flag accessibility issues. Don’t mistake smiling and nodding for support — establish timelines, roles, specifics, and objectives.

    And most importantly: measure the impact of your work by repeatedly conducting accessibility testing with real people. Build a strong before/after case to show the change the team has enabled and contributed to, and celebrate small and big accessibility wins. It might not sound like much, but it can start changing the culture faster than you think.

    Useful resources:
    Giving A Damn About Accessibility, by Sheri Byrne-Haber (disabled): https://lnkd.in/eCeFutuJ
    Accessibility For Designers: Where Do I Start?, by Stéphanie Walter: https://lnkd.in/ecG5qASY
    Web Accessibility In Plain Language (free book), by Charlie Triplett: https://lnkd.in/e2AMAwyt
    Building Accessibility Research Practices, by Maya Alvarado: https://lnkd.in/eq_3zSPJ
    How To Build A Strong Case For Accessibility:
    ↳ https://lnkd.in/ehGivAdY, by 🦞 Todd Libby
    ↳ https://lnkd.in/eC4jehMX, by Yichan Wang

    #ux #accessibility

  • View profile for Raj Vikramaditya
    Raj Vikramaditya is an Influencer

    Building takeUforward(1M+ Users) | Ex - Google, Media.net, Amazon | YouTuber(900K+) | JGEC

    889,545 followers

    Yesterday, a reel flooded my DMs, featuring someone boasting about a fabricated production issue as if it were a badge of honor.

    For any college student or aspiring developer reading this, here’s a glimpse of how a typical production release works in a large organization like Amazon, especially for a customer-facing feature:

    - Feature Flags: Any new feature or change you push is almost always behind a feature flag. If the flag is enabled, the new code executes; otherwise, it defaults to the existing behavior.

    - Bug Bash: The team conducts a rigorous bug bash to identify and fix any glaring issues.

    - Quality Assurance (QA): Dedicated QA engineers test the feature across all critical user journeys, ensuring stability and functionality.

    - Gradual Rollout: The production rollout is phased:
      • Initially, only 1% of users experience the feature.
      • If no critical bugs are reported, the rollout progresses to a higher percentage (e.g., 10%, then 50%, and finally 100%).
      • In some organizations, this takes the form of releasing to alpha, beta, and then general users, which follows the same principle.

    - Logs and Deployment Tracking: Every change or deployment is logged. This eliminates any ambiguity—no one needs to call around to ask whether a deployment occurred. A simple search in the deployment history provides all the details.

    - On-Call and Incident Management: In the event of an issue, on-call developers are the first to respond. If the new feature is causing the problem, they can disable the feature flag, instantly rolling back to the previous stable state.

    Key Takeaway: A proper production release is a systematic, collaborative, and well-monitored process. It’s not a playground for recklessness or boasting about mishandled issues.

    Be proud of delivering quality, not chaos. Keep learning, stay humble, and remember—the goal is to solve real problems, not create them.

    #striver #engineering
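    The feature-flag mechanics described in that post (percentage-based exposure, instant rollback by disabling the flag) fit in a few lines. Here is a minimal, hypothetical sketch in Python; the flag name, percentages, and helper function are illustrative assumptions, not any specific company's tooling.

    ```python
    import hashlib

    # Hypothetical rollout table: flag name -> percentage of users who see the new code.
    ROLLOUT_PERCENTAGE = {"new_checkout_flow": 1}  # start at 1%, raise to 10/50/100 later

    def is_enabled(flag: str, user_id: str) -> bool:
        """Deterministically bucket each user so their experience is stable across requests."""
        pct = ROLLOUT_PERCENTAGE.get(flag, 0)
        if pct <= 0:
            return False  # flag disabled: instant rollback to the existing behavior
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < pct

    def render_checkout(user_id: str) -> str:
        if is_enabled("new_checkout_flow", user_id):
            return "new checkout"       # new code path, behind the flag
        return "existing checkout"      # default: existing behavior
    ```

    Setting the percentage back to 0 (or removing the flag entry) is what lets on-call engineers roll back instantly without redeploying.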

  • View profile for Allen Holub

    I help you build software better & build better software.

    32,150 followers

    Last night, I was chatting in the hotel bar with a bunch of conference speakers at Goto-CPH about how evil PR-driven code reviews are (we were all in agreement), and Martin Fowler brought up an interesting point: the best time to review your code is when you use it. That is, continuous review is better than what amounts to a waterfall review phase.

    For one thing, the reviewer has a vested interest in assuring that the code they're about to use is high quality. Furthermore, you are reviewing the code in a real-world context, not in isolation, so you are better able to see if the code is suitable for its intended purpose.

    Continuous review, of course, also leads to a culture of continuous refactoring. You review everything you look at, and when you find issues, you fix them.

    My experience is that PR-driven reviews rarely find real bugs. They don't improve quality in ways that matter. They DO create bottlenecks, dependencies, and context-swap overhead, however, and all that pushes out delivery time and increases the cost of development with no balancing benefit.

    I will grant that two or more sets of eyes on the code leads to better code, but in my experience, the best time to do that is when the code is being written, not after the fact. Work in a pair, or better yet, a mob/ensemble.

    One of the teams at Hunter Industries, which mob/ensemble programs 100% of the time on 100% of the code, went a year and a half with no bugs reported against their code, with zero productivity hit. (Quite the contrary—they work very fast.) Bugs are so rare across all the teams, in fact, that they don't bother to track them. When a bug comes up, they fix it. Right then and there.

    If you're working in a regulatory environment, the Driver signs the code, and then any Navigator can sign off on the review, all as part of the commit/push process, so that's a non-issue.

    There's also a myth that it's best if the reviewer is not familiar with the code. I *really* don't buy that. An isolated reviewer doesn't understand the context. They don't know why design decisions were made. They have to waste a vast amount of time coming up to speed. They are also often not in a position to know whether the code will actually work. Consequently, they usually focus on trivia like formatting. That benefits nobody.

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect | Strategist | Generative AI | Agentic AI

    691,611 followers

    OAuth 2.0: Essential Best Practices for Developers in 2024

    OAuth 2.0 Overview:
    OAuth 2.0 is an authorization framework that enables applications to obtain limited access to user accounts on an HTTP service. It's critical for separating authentication from authorization, allowing third-party applications to access user resources without exposing credentials.

    Key Components:
    1. Authorization Server
    2. Resource Server
    3. Client Application
    4. Resource Owner

    Flow Breakdown:
    1. Client Initialization: The flow begins with user interaction in the client app.
    2. Authorization Request: The client redirects the user to the Authorization Server.
    3. User Authentication: The resource owner authenticates directly with the Authorization Server.
    4. Authorization Grant: The server issues an authorization code to the client.
    5. Token Exchange: The client exchanges the code for access and refresh tokens.
    6. API Access: The client uses the access token to request protected resources.

    Best Practices Highlighted in the Infographic:

    1. Authorization Code Flow:
       - Implement it for all redirect-based scenarios
       - Crucial for maintaining security in web and mobile applications

    2. Proof Key for Code Exchange (PKCE):
       - Essential for mitigating authorization code interception attacks
       - Particularly important for native and single-page applications

    3. Refresh Token Handling:
       - Rotate refresh tokens with each use
       - Monitor for duplicate usage to detect potential token theft
       - Invalidate tokens when the user logs out or changes their password

    4. Scope Limitation:
       - Minimize the scope of bearer access tokens
       - Use fine-grained scopes to limit token permissions

    5. Backend Security:
       - Ensure client authentication in the token exchange (Step 10 in the diagram)
       - Use key-based authentication instead of shared secrets
       - Securely encrypt and store access and refresh tokens

    6. Frontend Considerations:
       - Implement the Authorization Code flow with PKCE for new projects
       - Carefully manage refresh tokens in web apps
       - Focus on mitigating XSS vulnerabilities

    7. Native Client Guidelines:
       - Prefer system browsers over embedded browsers for enhanced security
       - Use OS-provided key stores for secure token storage

    Evolving Standards:
    The framework is progressing towards OAuth 2.1, which aims to consolidate best practices and enhance security. Staying informed about these developments is crucial for maintaining robust authorization systems.

    Implementing these practices not only enhances security but also promotes interoperability and user trust. As we continue to build interconnected systems, mastering OAuth 2.0 becomes increasingly vital for developers across all domains of software engineering.

    What challenges have you encountered implementing OAuth 2.0 in your projects? How are you preparing for the transition to OAuth 2.1?
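    The PKCE recommendation above (best practices 2 and 6) is easy to get wrong by hand, so here is a minimal sketch of the verifier/challenge pair and where each value travels in the flow. It follows RFC 7636's S256 method; the endpoint URLs, client_id, and scope are placeholders, not a real provider's values.

    ```python
    import base64
    import hashlib
    import secrets
    from urllib.parse import urlencode

    def make_pkce_pair() -> tuple[str, str]:
        """Return (code_verifier, code_challenge) for the S256 method of RFC 7636."""
        verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
        digest = hashlib.sha256(verifier.encode("ascii")).digest()
        challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
        return verifier, challenge

    verifier, challenge = make_pkce_pair()

    # 1. Authorization request: send only the challenge; keep the verifier on the client.
    auth_url = "https://auth.example.com/authorize?" + urlencode({
        "response_type": "code",
        "client_id": "my-client-id",                      # placeholder
        "redirect_uri": "https://app.example.com/callback",
        "scope": "profile",                               # keep scopes minimal
        "code_challenge": challenge,
        "code_challenge_method": "S256",
        "state": secrets.token_urlsafe(16),               # CSRF protection
    })

    # 2. Token exchange: after the redirect back, prove possession of the verifier.
    token_request_body = {
        "grant_type": "authorization_code",
        "code": "<authorization code from the redirect>",
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "code_verifier": verifier,
    }
    ```

    The verifier never leaves the client until the token exchange, which is what prevents an intercepted authorization code from being redeemed by an attacker.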

  • View profile for David Pereira

    Turning PMs from Backlog Managers into Value Maximizers | 100X PM Mastermind | Hands-On Workshops • Untrapping Product Teams

    88,166 followers

    Fixing production bugs is 640x more expensive than fixing them during coding. Here are 4 ways to transform how you handle bugs.

    The later you identify a bug, the more expensive it becomes. That’s why it’s important to design how you work to surface bugs faster. Here are four ways to increase your product quality.

    1- Increase quality practices

    How you work determines the quality of your output. Simple practices can help you reduce bugs or identify them faster:

    . Prototype testing: Test the usability of your idea with users before implementing it
    . Code review: A second pair of eyes will help your team uncover undesired behaviors
    . Unit tests: The boring thing developers hate, but they prevent many future bugs
    . Automated tests: These can accelerate testing and uncover undesired side effects
    . Dogfooding: Use your product yourself to experience it as much as possible the way users do

    2- Delegate highly complex yet standard features

    Choosing what to delegate is wise. You don’t have to build commodity features. For example, SAML, SSO, and SCIM are immediately available with WorkOS, which can save you time and nerves.

    Focus on the core of your product and delegate standard features. That removes a heavy burden from your shoulders.

    3- Get a software engineer specialized in quality as a role model

    Software quality can be complex. You can benefit from onboarding someone who’s done it before and can act as a role model for other software engineers.

    Yet I’m not recommending a classic QA engineer outside the team. I suggest having a team member focused on quality who levels up the team’s expertise. A software engineer specializing in quality can mentor others who are less experienced. It’s an investment that quickly pays off when you realize your product is more stable and reliable.

    4- Continuously review how to improve your work

    Every team has opportunities to improve. What worked yesterday may not work tomorrow. It’s key to step back and review what pushes you further and what holds you back.

    I recommend doing an overall quality review once a quarter:

    . How many bugs did you catch after release compared to the previous quarter?
    . How does your test coverage compare to the previous quarter?
    . Which practices helped you the most?
    . Which practices slowed you down?
    . Where do software engineers struggle?

    Understand the status quo, agree on what to improve, take action, rinse and repeat.

    –

    Which other practices do you recommend to avoid painful bugs?

    Let’s rock the product world together.
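    To make the "unit tests" bullet under point 1 concrete, here is a minimal sketch of the kind of small, focused test that catches a bug at coding time rather than in production. The discount function and its rules are hypothetical, purely for illustration; the tests run with pytest.

    ```python
    import pytest

    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_regular_discount():
        assert apply_discount(200.0, 25) == 150.0

    def test_zero_discount_keeps_price():
        assert apply_discount(99.99, 0) == 99.99

    def test_invalid_discount_is_rejected():
        # Without this guard, a typo like percent=250 would silently produce a negative price.
        with pytest.raises(ValueError):
            apply_discount(100.0, 250)
    ```

    The same tests then run in CI on every change, which is what turns a one-off check into the "surface bugs faster" practice the post describes.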

  • View profile for Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    33,894 followers

    Teams will increasingly include both humans and AI agents. We need to learn how best to configure them.

    A new Stanford University paper, "ChatCollab: Exploring Collaboration Between Humans and AI Agents in Software Teams," reveals a range of useful insights. A few highlights:

    💡 Human-AI Role Differentiation Fosters Collaboration. Assigning distinct roles to AI agents and humans in teams, such as CEO, Product Manager, and Developer, mirrors traditional team dynamics. This structure helps define responsibilities, ensures alignment with workflows, and allows humans to integrate seamlessly by adopting any role. This fosters a peer-like collaboration environment where humans can both guide and learn from AI agents.

    🎯 Prompts Shape Team Interaction Styles. The configuration of AI agent prompts significantly influences collaboration dynamics. For example, emphasizing "asking for opinions" in prompts increased such interactions by 600%. This demonstrates that thoughtfully designed role-specific and behavioral prompts can fine-tune team dynamics, enabling targeted improvements in communication and decision-making efficiency.

    🔄 Iterative Feedback Mechanisms Improve Team Performance. Human team members in roles such as clients or supervisors can provide real-time feedback to AI agents. This iterative process ensures agents refine their output, ask pertinent questions, and follow expected workflows. Such interaction not only improves project outcomes but also builds trust and adaptability in mixed teams.

    🌟 Autonomy Balances Initiative and Dependence. ChatCollab’s AI agents exhibit autonomy by independently deciding when to act or wait based on their roles. For example, developers wait for PRDs before coding, avoiding redundant work. Ensuring that agents understand role-specific dependencies and workflows optimizes productivity while maintaining alignment with human expectations.

    📊 Tailored Role Assignments Enhance Human Learning. Humans in teams can act as coaches, mentors, or peers to AI agents. This dynamic enables human participants to refine leadership and communication skills, while AI agents serve as practice partners or mentees. Configuring teams to simulate these dynamics provides dual benefits: skill development for humans and improved agent outputs through feedback.

    🔍 Measurable Dynamics Enable Continuous Improvement. Collaboration analysis using frameworks like Bales’ Interaction Process Analysis reveals actionable patterns in human-AI interactions. For example, tracking increases in opinion-sharing and other key metrics allows iterative configuration and optimization of combined teams.

    💬 Transparent Communication Channels Empower Humans. Using shared platforms like Slack for all human and AI interactions ensures transparency and inclusivity. Humans can easily observe agent reasoning and intervene when necessary, while agents remain responsive to human queries.

    Link to paper in comments.
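    As a rough illustration of the role-differentiation and prompt-configuration points above, here is a hedged sketch (in Python, not the ChatCollab codebase) of how role-specific prompts and dependencies for a mixed human/AI team might be declared. The role names echo the post; every field and value is an illustrative assumption.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class RoleConfig:
        name: str
        system_prompt: str                                    # shapes the agent's interaction style
        waits_for: list[str] = field(default_factory=list)    # role-specific dependencies
        is_human: bool = False                                # humans can adopt any role

    team = [
        RoleConfig(
            name="Product Manager",
            system_prompt=(
                "You are the Product Manager. Draft the PRD and ask the CEO and the "
                "client for their opinions before finalizing requirements."  # 'asking for opinions'
            ),
        ),
        RoleConfig(
            name="Developer",
            system_prompt="You are the Developer. Implement only what the PRD specifies.",
            waits_for=["Product Manager"],                    # e.g., developers wait for the PRD
        ),
        RoleConfig(name="Client", system_prompt="", is_human=True),  # human gives live feedback
    ]

    def ready_to_act(role: RoleConfig, finished_roles: set[str]) -> bool:
        """An agent acts only once the roles it depends on have delivered their output."""
        return all(dep in finished_roles for dep in role.waits_for)
    ```

    A scheduler built on ready_to_act would let the Developer agent wait for the Product Manager's PRD, which is the autonomy-versus-dependence behavior the fourth highlight describes.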

  • View profile for Dax Castro, ADS

    Accessibility Advocate | Trainer | IAAP ADS | Adobe-Certified PDF Accessibility Trainer | Keynote Speaker on Inclusive Design

    7,536 followers

    🚫 WCAG Levels Are Not a Grading Scale

    There’s a common misconception in digital accessibility: that WCAG levels A, AA, and AAA represent a “good, better, best” system. They don’t.

    ✅ WCAG levels are not about quality—they're about scope.
    • Level A addresses critical blockers for access.
    • Level AA covers common barriers that impact many users.
    • Level AAA includes enhanced requirements aimed at specific user needs—not a gold star for perfection.

    🔍 Not every AAA criterion is feasible or appropriate for every website or document. That’s by design. AAA is not “better,” it’s more specific.

    If you got caught up in this misconception, I hope this brought some clarity.

    💡 True accessibility is about meeting user needs, not chasing a letter grade.

    #DigitalAccessibility #WCAG #InclusiveDesign #AccessibilityEducation #A11y #UX #DocumentAccessibility #Chax

  • View profile for Ryan Peterman

    AI/ML Infra @ Meta | Writing About Software Engineering & Career Growth

    193,014 followers

    At my peak, I was landing an average of 5 code changes per day to prod.

    After over 1,000 changes, I realized the bottleneck in landing code faster wasn't writing it faster. The bottleneck is waiting on code reviews. Here are 4 tips on how to get your code reviewed faster:

    1. Break down your code - Each code change should have one main purpose. Breaking down your commits lowers the cognitive load for both you and the reviewer. This helps reviewers catch more bugs and review faster.

    2. Build a bulletproof test plan - Thorough test plans actually save you time, since your code will be accepted faster and you'll spend less time cleaning up breakages. For larger code changes, I like to include an E2E test, an integration test, and a rollout plan for my reviewers to comment on.

    3. Preempt feedback - You know you're writing good code if you invite feedback yet receive none. Before you publish your code, reread it and preemptively address any feedback. This will improve your code quality and save you time addressing comments.

    4. Know your reviewer audience - Try to communicate what your reviewers need in fewer words. I often write just a sentence or two with the high-level motivation, plus a list of bullets about what the code change aims to accomplish. The easier it is for the reviewer to understand the intent behind your change, the faster it will be reviewed.

    Anything else I missed that helps with getting code reviewed faster?

  • View profile for Pooja Jain
    Pooja Jain is an Influencer

    Storyteller | Lead Data Engineer @ Wavicle | LinkedIn Top Voice 2025, 2024 | Globant | LinkedIn Learning Instructor | 2x GCP & AWS Certified | LICAP’2022

    181,842 followers

    Why should data engineers emphasize System Design concepts❓

    I've been getting this question a lot.

    𝗧𝘂𝗿𝗻 𝗖𝗵𝗮𝗼𝘀 𝘁𝗼 𝗖𝗹𝗮𝗿𝗶𝘁𝘆 - Solid system design is the backbone for transforming data engineering solutions.

    System Design isn't just about building scalable pipelines; it's about having a solid foundation to ensure scalability, security, reliability, and future readiness. Ignoring these concepts risks failures, higher costs, and lost stakeholder trust.

    Some important system design concepts -

    📍𝗕𝗔𝗦𝗜𝗖: Master load balancers, API gateways, CDNs, and database fundamentals—the building blocks of any scalable data platform.

    📍𝗜𝗡𝗧𝗘𝗥𝗠𝗘𝗗𝗜𝗔𝗧𝗘: Level up with caching strategies, rate limiting, database sharding, and replication patterns for performance and availability.

    📍𝗔𝗗𝗩𝗔𝗡𝗖𝗘𝗗: Conquer distributed systems with the CAP theorem, consensus algorithms, message queues, service discovery, and comprehensive observability.

    Key architectural concepts to remember while designing the architecture:

    - 𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆 - Leverage auto-scaling, horizontal/vertical scaling, and distributed computing to handle growth efficiently.
    - 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 - Optimize queries, cache results, and enable parallel processing for speed.
    - 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 - Implement encryption, RBAC, network security, and audit logging to protect data assets.
    - 𝗖𝗼𝘀𝘁-𝗘𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲𝗻𝗲𝘀𝘀 - Focus on resource optimization, ongoing cost monitoring, and lifecycle management to balance budgets.
    - 𝗗𝗮𝘁𝗮 𝗤𝘂𝗮𝗹𝗶𝘁𝘆 - Set up validation, anomaly detection, and quality metrics to maintain trustworthy data.
    - 𝗠𝗲𝘁𝗮𝗱𝗮𝘁𝗮 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 - Create data catalogs, lineage tracking, and schema evolution mechanisms to manage data context.
    - 𝗜𝗻𝘁𝗲𝗿𝗼𝗽𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 - Design APIs, standard data formats, and cross-platform integration for seamless data flow.
    - 𝗥𝗲𝗹𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆 & 𝗥𝗲𝘀𝗶𝗹𝗶𝗲𝗻𝗰𝗲 - Emphasize fault tolerance, disaster recovery strategies, and high-availability setups for uptime.
    - 𝗠𝗮𝗶𝗻𝘁𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆 - Use modular architecture, observability, and automated testing to simplify updates and troubleshooting.

    Do not neglect these:
    ✅ 𝗗𝗲𝘀𝗶𝗴𝗻 - Design for scale from the start; skipping it leads to bottlenecks as you grow.
    ✅ 𝗧𝗿𝗮𝗰𝗸 - Track everything, including metadata management.
    ✅ 𝗦𝗲𝗰𝘂𝗿𝗲 - Design with security in mind to avoid breaches.
    ✅ 𝗕𝘂𝗶𝗹𝗱 - Build in monitoring to save time and resources.

    Explore some informative references to master system design -
    - ByteByteGo (Alex Xu) System Design book - https://lnkd.in/gGdgJRDd
    - Design Gurus - https://lnkd.in/gaphzp89
    - DataExpert.io handbook by Zach Wilson - https://lnkd.in/gb4xBQJy
    - Donne Martin's System Design Primer - https://shorturl.at/mdbK5
    - Neo Kim - https://lnkd.in/g966FSPk

    Image Credits: Shalini Goyal!

    Follow Pooja Jain for more on Data Engineering!

    #data #engineering #systemdesign #bigdata
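    To ground one of the intermediate concepts above, here is a minimal sketch of a token-bucket rate limiter in Python. The capacity and refill rate are illustrative values, not recommendations, and a production limiter would usually keep its state in shared storage (e.g., Redis) rather than in process memory.

    ```python
    import time

    class TokenBucket:
        """Allow bursts up to `capacity` requests, refilled at `rate` tokens per second."""

        def __init__(self, capacity: int, rate: float):
            self.capacity = capacity
            self.rate = rate
            self.tokens = float(capacity)
            self.last_refill = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to the time elapsed since the last check.
            self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # caller should reject or queue the request

    # Example: allow bursts of up to 10 requests, refilling at 5 tokens per second.
    limiter = TokenBucket(capacity=10, rate=5.0)
    for i in range(12):
        print(i, "allowed" if limiter.allow() else "throttled")
    ```

    The same bucket-per-client idea extends to API gateways and ingestion endpoints, where throttling protects downstream systems from sudden bursts.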
