Not all AI agents are created equal — and the framework you choose shapes your system's intelligence, adaptability, and real-world value.

As we transition from monolithic LLM apps to 𝗺𝘂𝗹𝘁𝗶-𝗮𝗴𝗲𝗻𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀, developers and organizations are seeking frameworks that can support 𝘀𝘁𝗮𝘁𝗲𝗳𝘂𝗹 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴, 𝗰𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝘃𝗲 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻-𝗺𝗮𝗸𝗶𝗻𝗴, and 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝘁𝗮𝘀𝗸 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻.

I created this 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗖𝗼𝗺𝗽𝗮𝗿𝗶𝘀𝗼𝗻 to help you navigate the rapidly growing ecosystem. It outlines the 𝗳𝗲𝗮𝘁𝘂𝗿𝗲𝘀, 𝘀𝘁𝗿𝗲𝗻𝗴𝘁𝗵𝘀, 𝗮𝗻𝗱 𝗶𝗱𝗲𝗮𝗹 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 of the leading platforms — including LangChain, LangGraph, AutoGen, Semantic Kernel, CrewAI, and more.

Here’s what stood out during my analysis:

↳ 𝗟𝗮𝗻𝗴𝗚𝗿𝗮𝗽𝗵 is emerging as the go-to for 𝘀𝘁𝗮𝘁𝗲𝗳𝘂𝗹, 𝗺𝘂𝗹𝘁𝗶-𝗮𝗴𝗲𝗻𝘁 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 — perfect for self-improving, traceable AI pipelines (see the sketch below).
↳ 𝗖𝗿𝗲𝘄𝗔𝗜 stands out for 𝘁𝗲𝗮𝗺-𝗯𝗮𝘀𝗲𝗱 𝗮𝗴𝗲𝗻𝘁 𝗰𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝗼𝗻, useful in project management, healthcare, and creative strategy.
↳ 𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗞𝗲𝗿𝗻𝗲𝗹 quietly brings 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲-𝗴𝗿𝗮𝗱𝗲 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗮𝗻𝗱 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 to the agent conversation — a key need for regulated industries.
↳ 𝗔𝘂𝘁𝗼𝗚𝗲𝗻 simplifies the build-out of 𝗰𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗮𝗴𝗲𝗻𝘁𝘀 𝗮𝗻𝗱 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻-𝗺𝗮𝗸𝗲𝗿𝘀 through robust context handling and custom roles.
↳ 𝗦𝗺𝗼𝗹𝗔𝗴𝗲𝗻𝘁𝘀 is refreshingly light — ideal for 𝗿𝗮𝗽𝗶𝗱 𝗽𝗿𝗼𝘁𝗼𝘁𝘆𝗽𝗶𝗻𝗴 𝗮𝗻𝗱 𝘀𝗺𝗮𝗹𝗹-𝗳𝗼𝗼𝘁𝗽𝗿𝗶𝗻𝘁 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀.
↳ 𝗔𝘂𝘁𝗼𝗚𝗣𝗧 continues to shine as a sandbox for 𝗴𝗼𝗮𝗹-𝗱𝗿𝗶𝘃𝗲𝗻 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝘆 and open experimentation.

𝗖𝗵𝗼𝗼𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗶𝘀𝗻’𝘁 𝗮𝗯𝗼𝘂𝘁 𝗵𝘆𝗽𝗲 — 𝗶𝘁’𝘀 𝗮𝗯𝗼𝘂𝘁 𝗮𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝘄𝗶𝘁𝗵 𝘆𝗼𝘂𝗿 𝗴𝗼𝗮𝗹𝘀:
- Are you building enterprise software with strict compliance needs?
- Do you need agents to collaborate like cross-functional teams?
- Are you optimizing for memory, modularity, or speed to market?

This visual guide is built to help you and your team 𝗰𝗵𝗼𝗼𝘀𝗲 𝘄𝗶𝘁𝗵 𝗰𝗹𝗮𝗿𝗶𝘁𝘆.

Curious what you're building — and which framework you're betting on?
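To make the LangGraph point concrete, here is a minimal sketch of its stateful-orchestration style, based on its documented StateGraph API (details vary by version). The writer/reviewer nodes are illustrative stubs, not a recommended pipeline:

```python
# Minimal LangGraph sketch: two nodes share typed state, and the graph
# records the flow explicitly — which is what makes runs traceable.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    draft: str
    review: str

def writer(state: State) -> dict:
    return {"draft": "v1 of the answer"}  # a real node would call an LLM here

def reviewer(state: State) -> dict:
    return {"review": f"looks good: {state['draft']}"}

graph = StateGraph(State)
graph.add_node("writer", writer)
graph.add_node("reviewer", reviewer)
graph.set_entry_point("writer")
graph.add_edge("writer", "reviewer")
graph.add_edge("reviewer", END)

app = graph.compile()
print(app.invoke({"draft": "", "review": ""}))
```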
Using AI For Task Management
Explore top LinkedIn content from expert professionals.
-
📊 What’s the right KPI to measure an AI agent’s performance?

Here’s the trap: most companies still measure the wrong thing. They track activity (tasks completed, chats answered) instead of impact.

Based on my experience, effective measurement is multi-dimensional. Think of it as six lenses:

1️⃣ Accuracy – Is the agent correct?
• Response accuracy (right answers)
• Intent recognition accuracy (did it understand the ask?)

2️⃣ Efficiency – Is it fast and smooth?
• Response time
• Task completion rate (fully autonomous vs guided vs human takeover)

3️⃣ Reliability – Is it stable over time?
• Uptime & availability
• Error rate

4️⃣ User Experience & Engagement – Do people trust and return?
• CSAT (outcome + interaction + confidence)
• Repeat usage rate
• Friction metrics (repeats, clarifying questions, misunderstandings)

5️⃣ Learning & Adaptability – Does it get better?
• Improvement over time
• Adaptation speed to new data/conditions
• Retraining frequency & impact

6️⃣ Business Outcomes – Does it move the needle?
• Conversion & revenue impact
• Cost per interaction & ROI
• Strategic goal contribution (retention, compliance, expansion)

Gartner predicts that by 2027, 60% of business leaders will rely on AI agents to make critical decisions. If that’s true, then measuring them right is existential.

So, here’s the debate: should AI agents be held to the same KPIs as humans (outcomes, growth, value) — or do they need an entirely new framework?

👉 If you had to pick ONE metric tomorrow, what would you measure first?

#AI #Agents #KPIs #FutureOfWork #BusinessValue #Productivity #DecisionMaking
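As a minimal sketch of how a few of these lenses translate into numbers, assuming a hypothetical interaction-log schema (the field names here are invented, not a real product's telemetry):

```python
# Hypothetical: compute intent accuracy, autonomous completion rate,
# median latency, and average CSAT from a list of logged interactions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Interaction:
    intent_correct: bool   # lens 1: did the agent understand the ask?
    resolved: bool         # lens 2: completed without human takeover
    latency_ms: float      # lens 2: response time
    csat: int | None       # lens 4: 1-5 survey score, if the user answered

def agent_kpis(log: list[Interaction]) -> dict[str, float]:
    rated = [i.csat for i in log if i.csat is not None]
    return {
        "intent_accuracy": mean(i.intent_correct for i in log),
        "autonomous_completion_rate": mean(i.resolved for i in log),
        "p50_latency_ms": sorted(i.latency_ms for i in log)[len(log) // 2],
        "avg_csat": mean(rated) if rated else float("nan"),
    }
```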
-
🤖 MCP: What It Means For UX Designers. With practical guidelines on what it means and what designers can now do in AI products ↓

🚫 LLMs can’t access real-time data without re-training.
 ↳ E.g. LLMs can’t answer "What’s the weather today?".
✅ When a user asks, RAG retrieves “fresh” context.
✅ It adds context to the user’s query, sends it back to the LLM.
 ↳ E.g. "What’s the weather today?" + "Chicago, 54 F"
🤔 RAG is retrieval-only; it can’t trigger actions or workflows.
✅ MCP gives AI real-time access to tools, data, actions.
✅ Any product can set up an MCP server for ChatGPT etc.
✅ It describes available tools, data sources, internal tasks.
 ↳ E.g. update calendar, send email, add a record, import.
✅ MCP = instruction manual telling LLMs how to use tools.
✅ User sends a query → AI looks up if any tool is a match.
✅ If needed, AI asks for permission to access an MCP server.
✅ It reads instructions, then triggers an action based on the query.
✅ Users can access your tools via AI systems of their choice.

MCP (Model Context Protocol) sounds like merely a technical feature that gives AI access to tools, actions and live data streams in your product. But what it actually provides is a way to integrate your product into any AI product that a user chooses to use.

As Addy Osmani noted, for example, with Zapier MCP, an AI agent can perform any action that Zapier supports, from sending Slack messages, creating Google Calendar events, updating CRM records, to initiating e-commerce orders. And that’s what allows for fast automation with a pipeline of AI agents.

If you are selling products, ChatGPT or Claude could filter, sort and showcase your products as users ask for them. With MCP, AI could access Jira, Notion and GitHub to provide real-time status updates. For sensitive data, users could access their private data with established privacy guardrails and access codes.

AI agents could break down a complex task like booking a ticket into a series of small tasks and complete them one by one, without ever visiting a website at all — if the platform provides an MCP server with access to that tool. As we added URLs for search engines to crawl, now we can add “features” for AI to use.

This also opens the door for fine-grained personalization and customization. But it also requires transparency and control over how users’ data and queries flow between AI agents and tools. And for designers, it probably means more interactions outside of the UI, and more integrations with AI.

Meet the world of MCP-automated workflows — e.g. from Figma to code, design system maintenance and quick prototyping. That’s quite a profound change — and a change that might make user experience blazingly fast and perfectly seamless for users — and often without UI interactions at all.

(Useful resources in the comments below ↓)
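For the curious, here is roughly what "setting up an MCP server" looks like, sketched with the official MCP Python SDK (`pip install mcp`); the weather tool is a stub for illustration:

```python
# Minimal MCP server sketch: the docstring and type hints become the
# "instruction manual" that tells an LLM client how to use the tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    # A real server would call a weather API here; this value is a stub.
    return f"{city}: 54 F, clear"

if __name__ == "__main__":
    mcp.run()  # exposes the tool to any MCP-capable client
```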
-
Many suggest time-blocking for learning. Am I the only one who can’t make it last?

It usually works for a while, but life gets busy and stressful, and scheduled learning is the first thing sacrificed.

Instead, I switched to 𝗛𝗮𝗯𝗶𝘁 𝗦𝘁𝗮𝗰𝗸𝗶𝗻𝗴: pairing learning with habits I already can’t skip, like walking, commuting, or exercising.

𝗧𝗵𝗲 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗽𝗮𝗿𝘁, like coding or writing, is my first task in the early morning. This way, it can’t be missed.

𝗛𝗮𝗯𝗶𝘁 𝗦𝘁𝗮𝗰𝗸𝗶𝗻𝗴: add what you need to habits you already have.

It guarantees 100% consistency and creates extra learning time for me:
• Exercise: ~7 hours/week
• Dog walks: ~7 hours/week
• Commutes: ~10 hours/week
• Walks / runs: ~10 hours/week

That’s 𝗮𝗻 𝗲𝘅𝘁𝗿𝗮 ~𝟮,𝟬𝟰𝟬 𝗺𝗶𝗻𝘂𝘁𝗲𝘀 𝗼𝗳 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗽𝗲𝗿 𝘄𝗲𝗲𝗸 (there are 10,080 minutes in a week).

"But you can’t multitask, so this learning isn’t effective." Actually, you can. What doesn’t work is task switching; pairing tasks on different channels (physical + verbal) works well.

To make parallel learning effective, use 𝗩𝗼𝗶𝗰𝗲 𝗔𝗜 & 𝗖𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗔𝗜 to:
• Generate questions.
• Explain concepts.
• Ask you questions.
• Evaluate your answers.

Everyone can start in one minute:
1. Attach a document to ChatGPT.
2. Ask it to generate questions.
3. Walk and talk using Voice AI.
4. Let ChatGPT ask questions.
5. Answer them yourself.
6. Request feedback.

This turns passive listening into active learning. And it fits right into your day, no extra time needed.

What are your strategies for consistent learning?
-
A multi-AI agentic system is like a finely tuned orchestra, where each AI agent plays a vital part in creating a masterpiece. 🤖🤖🤖

A multi-AI agentic system is a sophisticated approach where multiple specialized AI agents collaborate under the coordination of a central orchestrator to accomplish complex tasks that would be challenging for a single AI to handle efficiently.

In this architecture (image below), when a user provides a high-level goal — such as "Find a flight to New York" — the orchestrator AI agent receives this input and intelligently decomposes it into smaller, manageable subtasks that can be distributed among specialized agents, each designed with specific expertise and capabilities.

For the flight booking example, a Planning Agent analyzes optimal flight options considering factors like time preferences and budget constraints, while a Booking Agent handles the actual reservation processes and interfaces with airline systems. Simultaneously, a Memory Agent leverages historical user data and preferences to personalize recommendations, and a Critic Agent performs quality assurance by validating the proposed solution for accuracy, pricing discrepancies, and potential issues.

After each specialized agent completes its designated subtask, the orchestrator collects all individual outputs, merges the information into a coherent solution, performs final verification checks, and delivers a comprehensive response back to the user.

This distributed approach offers significant advantages: improved accuracy through specialization, enhanced scalability as new agents can be added for additional capabilities, better fault tolerance since individual agent failures don't compromise the entire system, and increased efficiency through parallel processing of subtasks — ultimately creating a more robust and capable AI system than traditional monolithic approaches.

Follow along & build a multi-AI agent system with CrewAI. My step-by-step video tutorial: https://lnkd.in/gNmbDSCJ

The future is all about Agentic AI, know why: https://lnkd.in/gjGuYGK3
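As a taste of what the linked CrewAI tutorial builds, here is a minimal sketch following CrewAI's documented quickstart pattern (exact signatures can vary by version, and an LLM API key must be configured in the environment); the roles and task texts are illustrative:

```python
# Two specialist agents — a planner and a critic — coordinated by a Crew,
# mirroring the Planning Agent / Critic Agent split described above.
from crewai import Agent, Task, Crew

planner = Agent(
    role="Planning Agent",
    goal="Analyze flight options given time and budget constraints",
    backstory="Specialist in travel search and trade-off analysis.",
)
critic = Agent(
    role="Critic Agent",
    goal="Validate proposed itineraries for errors and pricing issues",
    backstory="Quality-assurance reviewer for travel bookings.",
)

plan = Task(
    description="Find a flight to New York for next Friday under $400.",
    expected_output="A shortlist of 2-3 candidate flights with prices.",
    agent=planner,
)
review = Task(
    description="Check the shortlist for accuracy and pricing discrepancies.",
    expected_output="A validated recommendation or a list of issues.",
    agent=critic,
)

crew = Crew(agents=[planner, critic], tasks=[plan, review])
result = crew.kickoff()  # runs the tasks sequentially by default
print(result)
```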
-
If you’re an AI engineer building multi-agent systems, this one’s for you.

As AI applications evolve beyond single-task agents, we’re entering an era where multiple intelligent agents collaborate to solve complex, real-world problems. But success in multi-agent systems isn’t just about spinning up more agents — it’s about designing the right coordination architecture: deciding how agents talk to each other, split responsibilities, and come to shared decisions.

Just like software engineers rely on design patterns, AI engineers can benefit from agent design patterns to build systems that are scalable, fault-tolerant, and easier to maintain. Here are 7 foundational patterns I believe every AI practitioner should understand:

→ 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Run agents independently on different subtasks. This increases speed and reduces bottlenecks — ideal for parallelized search, ensemble predictions, or document classification at scale.

→ 𝗦𝗲𝗾𝘂𝗲𝗻𝘁𝗶𝗮𝗹 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Chain agents so the output of one becomes the input of the next. Works well for multi-step reasoning, document workflows, or approval pipelines.

→ 𝗟𝗼𝗼𝗽 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Enable feedback between agents for iterative refinement. Think of use cases like model evaluation, coding agents testing each other, or closed-loop optimization.

→ 𝗥𝗼𝘂𝘁𝗲𝗿 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Use a central controller to direct tasks to the right agent(s) based on input. Helpful when agents have specialized roles (e.g., image vs. text processors) and dynamic routing is needed.

→ 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗼𝗿 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Merge outputs from multiple agents into a single result. Useful for ranking, voting, consensus-building, or when synthesizing diverse perspectives.

→ 𝗡𝗲𝘁𝘄𝗼𝗿𝗸 (𝗛𝗼𝗿𝗶𝘇𝗼𝗻𝘁𝗮𝗹) 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Allow all agents to communicate freely in a many-to-many fashion. Enables collaborative systems like swarm robotics or autonomous fleets.
✔️ Pros: Resilient and decentralized
⚠️ Cons: Can introduce redundancy and increase communication overhead

→ 𝗛𝗶𝗲𝗿𝗮𝗿𝗰𝗵𝗶𝗰𝗮𝗹 𝗣𝗮𝘁𝘁𝗲𝗿𝗻
Structure agents in a supervisory tree. Higher-level agents delegate tasks and oversee execution. Useful for managing complexity in large agent teams.
✔️ Pros: Clear roles and top-down coordination
⚠️ Cons: Risk of bottlenecks or failure at the top node

These patterns aren’t mutually exclusive. In fact, most robust systems combine multiple strategies. You might use a router to assign tasks, parallel execution to speed up processing, and a loop for refinement — all in the same system (see the sketch below).

Visual inspiration: Weaviate
------------
If you found this insightful, share this with your network.
Follow me (Aishwarya Srinivasan) for more AI insights, educational content, and data & career path.
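Here is a minimal sketch of combining three of these patterns — router, parallel, and aggregator — in plain asyncio; the agent functions are stubs standing in for real model calls:

```python
# Router picks a specialist per input, the routed agents run in parallel,
# and an aggregator merges their outputs into one result.
import asyncio

async def text_agent(doc: str) -> str:
    return f"text-summary({doc})"     # stand-in for an LLM call

async def image_agent(doc: str) -> str:
    return f"image-caption({doc})"    # stand-in for a vision model call

def route(doc: str):
    # Router pattern: choose the specialist based on the input type.
    return image_agent if doc.endswith((".png", ".jpg")) else text_agent

async def aggregate(docs: list[str]) -> str:
    # Parallel pattern: run the routed agents concurrently.
    results = await asyncio.gather(*(route(d)(d) for d in docs))
    # Aggregator pattern: merge individual outputs into a single result.
    return " | ".join(results)

print(asyncio.run(aggregate(["report.txt", "chart.png"])))
```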
-
Project AI Assistants are the secret weapon to 10x your productivity. They're one of my favorite ways to use AI. Here's how to build one in minutes.

You can use ChatGPT Projects, Claude Projects, or Gemini Gems for your Project AI assistant. You create a separate project assistant to manage each major outcome you're accountable for — e.g., grow demand by 30%, double weekly active users, or use AI to increase closed-won deals by 50%.

For each Project AI assistant:

1. Give it all the context. People don't understand how amazing AI is at holding all the context for you. Give it:
- All the project's strategic documents.
- All the project's meeting transcripts.
- Bonus: use a meeting app like 'Fellow' to attend meetings on your behalf and grab the meeting notes; now your assistant has context across all meetings, even if you're not in them.
- Loom transcripts. Have the team send updates in Looms; it's a huge unlock.
- External deep research: pairing external research with internal is powerful.

2. Instructions. Provide your project assistant with clear instructions on how to work with you. Below is just a tiny sample from mine (a sketch of how these pieces assemble follows after this list).
a. Be clear and concise: Get to the point, but add context where needed. Prioritize clarity without losing important nuance.
b. Use evidence: Cite sources (e.g., "2024 Q3 GTM Strategy Doc") and include relevant excerpts when making recommendations.
c. Surface blind spots: Go beyond the prompt. Flag risks, missed opportunities, or second-order effects.
d. Challenge respectfully: If you disagree, explain why with logic and evidence — constructively.

[I'm doing a complete breakdown of my Project AI Assistants for my newsletter subscribers; sign up for full instructions & templates. Signup link on my LinkedIn profile page.]

3. Templates. Give the Project AI assistant templates for frequent asks you'll have; examples I use:
- Executive Memo Template: a 6-page memo template on progress, challenges, blockers, opportunities.
- Weekly Blockers Template: surfaces the biggest blockers to solve that week.
- Bi-weekly Momentum Template: surfaces what's been shipped the past two weeks and what's planned for the next two weeks.
- Monthly Status Template: writes a monthly summary of results to drive accountability across the team.
- Opportunities Researcher Template: identifies the biggest missed opportunities the team should pay more attention to.

There's so much fluff in all the AI demos you'll see on social media that people forget about the less flashy but more impactful use cases for AI.
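If you prefer to wire this up via an API rather than a Projects UI, the assembly step might look like the following sketch — the file layout, instruction text, and function names are all illustrative assumptions, not a specific product's API:

```python
# Hypothetical: build a project assistant's system prompt from standing
# instructions plus a folder of context docs (strategy docs, transcripts).
from pathlib import Path

INSTRUCTIONS = """\
You are my project assistant for: {outcome}.
- Be clear and concise; keep important nuance.
- Cite sources (doc name + excerpt) when recommending.
- Surface blind spots: risks, missed opportunities, second-order effects.
- Challenge respectfully, with logic and evidence.
"""

def build_system_prompt(outcome: str, context_dir: str) -> str:
    docs = [p.read_text() for p in sorted(Path(context_dir).glob("*.md"))]
    context = "\n\n---\n\n".join(docs)
    return INSTRUCTIONS.format(outcome=outcome) + "\n# Project context\n" + context

prompt = build_system_prompt("grow demand by 30%", "./project_docs")
```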
-
This productivity tool saved me 20 hours per week: the Eisenhower Matrix.

Most people confuse being busy with being productive. But activity isn't achievement. Progress is.

I spent years in reactive mode — fighting fires, handling "urgent" tasks, wondering why I never made real progress on what mattered. Then I discovered this: not all tasks are created equal. The breakthrough came from separating urgent from important.

The system is simple. Draw a 2x2 matrix and categorize every task:
• Important & Urgent → Do Now
• Important & Not Urgent → Decide (schedule it)
• Not Important & Urgent → Delegate
• Not Important & Not Urgent → Delete

Track your tasks for one week. At the end, ask yourself:
• Which quadrant consumed most of your time?
• Which quadrant holds most of your tasks?

The gap between these answers reveals everything. I discovered I was spending 70% of my time on "urgent but not important" tasks — other people's priorities disguised as emergencies.

The shift was simple: I started saying no to fake urgencies and scheduling deep work for what actually mattered.

You can't eliminate all urgent tasks. But when you spend most of your time on important non-urgent work, you build the life you want instead of reacting to the life you have.

Watch the full 3-minute breakdown to implement this system today.
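For the programmers in the room, the 2x2 rule is small enough to write down as code — a purely illustrative sketch:

```python
# The Eisenhower decision rule: two booleans in, one action out.
def eisenhower(important: bool, urgent: bool) -> str:
    if important and urgent:
        return "Do now"
    if important:
        return "Schedule it"   # decide when, then protect the slot
    if urgent:
        return "Delegate"      # someone else's priority in disguise
    return "Delete"

assert eisenhower(important=True, urgent=False) == "Schedule it"
```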
-
What if you stopped working 48 hours before your project deadline?

This project management chart perfectly captures what happens to most teams. We laugh because it's painfully true. But what if there was a way to avoid that chaotic "Project Reality" scenario altogether?

When I was a child, we would all be cramming the day before our school tests. During lunch breaks on test days, the school playground transformed into a sea of anxious children muttering facts while neglecting their parathas.

Then I witnessed something that would change my approach to deadlines. The day before a major exam, I visited my neighbour to borrow her notes. I found her calmly playing carrom. "I never open my books 48 hours before an exam," she said with serene confidence. I was shocked. Her grades? Consistently stellar.

This simple philosophy transformed my approach to project management: always allocate a 20% time buffer at the end of every project, during which no work is scheduled.

This buffer isn't for work. It's for reflection, quality improvements, and the strategic thinking that transforms good deliverables into exceptional ones.

Here are some benefits I have observed using this approach:
▪️That last tweak to a colour or button dramatically improves the UI
▪️Rework requests sharply decline
▪️Sales pitches achieve better outcomes
▪️The final touches that introduce personalised elements help build strong customer relationships
▪️The board is much more engaged in the conversation and approvals go through smoothly
▪️Output is significantly streamlined and simplified, multiplying impact
▪️Less stress all around

Do teams initially resist this approach? Absolutely. "We're wasting productive time" and "the client/board doesn't need the material so far in advance of the meeting" are common complaints. But as teams experience the dramatic quality improvements and the elimination of those dreaded last-minute fire drills, attitudes change.

The next time you're planning a project, fight the urge to schedule work until the very last minute. Those final breathing spaces are where excellence happens.

Have you tried an unconventional deadline management strategy? Do share!

#projectmanagement #leadership #execution #productivityhacks
-
It is Friday afternoon and your ML service is 𝟭𝟬𝘅 𝘀𝗹𝗼𝘄𝗲𝗿 than usual. Your boss is panicking 😱😱😱 What do you do? ⬇️

𝗥𝗲𝗮𝗹 𝘄𝗼𝗿𝗹𝗱 𝗲𝘅𝗮𝗺𝗽𝗹𝗲 💁
Imagine you're an ML engineer working on the real-time recommendation system at a large video streaming platform ▶️. The recommendation API is a Rust micro-service that runs in a Kubernetes cluster, together with hundreds of other services.

For each incoming request, your recommendation API does 3 things:
1 > 𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴, to map raw request data to ML model features.
2 > Raw 𝗺𝗼𝗱𝗲𝗹 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝗶𝗼𝗻, to map ML model features to predicted scores.
3 > 𝗣𝗼𝘀𝘁-𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴, to map raw predictions to actual recommendations and send them back to the client app.

𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺
For this API to make a real impact on business metrics (aka higher user engagement), you need to make sure that recommendations are generated and served as fast as possible. Example: "95% of requests must complete within 100 ms." Otherwise, the end users do not see the recommendations on time, and your system has 𝗭𝗘𝗥𝗢 impact on their engagement with the platform.

𝗡𝗼𝘄 𝗶𝗺𝗮𝗴𝗶𝗻𝗲... 🤔💭
It is Friday afternoon, and you get a call from your boss: “Hey! Over 50% of recommendations are taking over 1 second. What is going on?” 😱😱😱

How do you find the root cause of the problem? And more importantly, how do you fix it? This is when 𝗽𝗿𝗼𝗳𝗶𝗹𝗶𝗻𝗴 and a tool like 𝗣𝗲𝗿𝗳𝗼𝗿𝗮𝘁𝗼𝗿 come to the rescue.

𝗪𝗵𝗮𝘁 𝗶𝘀 𝗣𝗲𝗿𝗳𝗼𝗿𝗮𝘁𝗼𝗿?
Perforator is a powerful profiling tool built and open-sourced by Yandex that
> tells you how much time and resources each line of your program code takes,
> so you can quickly find the bottlenecks in your system and fix them.

Perforator supports many programming languages (Python, C, C++, Rust, Go, Java) and can run either locally on a single machine or continuously across an entire cluster.

For our recommendation API, we could run a scan and realize, for example, that
> the latest version of our feature engineering function, which includes new features developed by the data science team, takes over 200 ms, and it is better to drop them, or
> the model prediction step is terribly slow, as the underlying model is a deeper XGBoost model than in the previous release. The incremental improvement in test metrics came at an excessively large cost in latency.

𝗜𝗻 𝗮 𝗻𝘂𝘁𝘀𝗵𝗲𝗹𝗹 🥔
Perforator analyzes code in real time, and
> allows developers to find bottlenecks, optimize code, and understand which functions are in use and which are obsolete,
> provides live insights into server and application performance.

𝗪𝗮𝗻𝗻𝗮 𝗸𝗻𝗼𝘄 𝗺𝗼𝗿𝗲 𝗮𝗯𝗼𝘂𝘁 𝗣𝗲𝗿𝗳𝗼𝗿𝗮𝘁𝗼𝗿?
Visit the GitHub page of this open-source project and share your love with a star ⭐
🔗 > https://bit.ly/3WFyISF
Official Yandex blog post 🔗 > https://lnkd.in/eCQekKpB
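Perforator itself runs as an agent on your hosts, so its setup doesn't fit in a post; as a stand-in, here is the same idea with Python's built-in cProfile — find which function eats the time — on a toy version of the recommendation flow (the functions are hypothetical stubs, not Perforator's API):

```python
# Profile a toy recommend() call and print the top functions by
# cumulative time; the report would point straight at the bottleneck.
import cProfile
import pstats

def feature_engineering(request):
    # Deliberately slow stub standing in for the >200 ms feature step.
    return [hash((request, i)) % 100 for i in range(200_000)]

def recommend(request):
    feats = feature_engineering(request)
    return sorted(feats)[:10]  # stand-in for prediction + post-processing

cProfile.run("recommend('user-42')", "prof.out")
pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)
```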