
Welcome to the M365 Show — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, the M365 Show brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.
The Security Intern Is Now A Terminator
Opening: “The Security Intern Is Now A Terminator”

Meet your new intern. Doesn’t sleep, doesn’t complain, doesn’t spill coffee into the server rack, and just casually replaced half your Security Operations Center’s workload in a week. This intern isn’t a person, of course. It’s a synthetic analyst—an autonomous agent from Microsoft’s Security Copilot ecosystem—and it never asks for a day off.

If you’ve worked in a SOC, you already know the story. Humans drowning in noise. Every endpoint pings, every user sneeze triggers a log—most of it false, all of it demanding review. Meanwhile, every real attack is buried under a landfill of “possible events.” That’s not vigilance. That’s punishment disguised as productivity.

Microsoft decided to automate the punishment. Enter Security Copilot agents: miniature digital twins of your best analysts, purpose-built to think in context, make decisions autonomously, and—this is the unnerving part—improve as you correct them. They’re not scripts. They’re coworkers. Coworkers with synthetic patience and the ability to read a thousand alerts per second without blinking.

We’re about to meet three of these new hires. Agent One hunts phishing emails—no more analyst marathons through overflowing inboxes. Agent Two handles conditional access chaos—rewriting identity policy before your auditors even notice a gap. Agent Three patches vulnerabilities—quietly prepping deployments while humans argue about severity. Together, they form a kind of robotic operations team: one scanning your messages, one guarding your doors, one applying digital bandages to infected systems. And like any overeager intern, they’re learning frighteningly fast.

Humans made them to help. But in teaching them how we secure systems, we also taught them how to think about defense. That’s why, by the end of this video, you’ll see how these agents compress SOC chaos into something manageable—and maybe a little unsettling. The question isn’t whether they’ll lighten your workload.
They already have. The question is how long before you report to them.

Section 1: The Era of Synthetic Analysts

Security Operations Centers didn’t fail because analysts were lazy. They failed because complexity outgrew the species. Every modern enterprise floods its SOC with millions of events daily. Each event demands attention, but only a handful actually matter—and picking out those few is like performing CPR on a haystack hoping one straw coughs.

Manual triage worked when logs fit on one monitor. Then came cloud sprawl, hybrid identities, and a tsunami of false positives. Analysts burned out. Response times stretched from hours to days. SOCs became reaction machines—collecting noise faster than they could act.

Traditional automation was supposed to fix that. Spoiler: it didn’t. Those old-school scripts are calculators—they follow formulas but never ask why. They trigger the same playbook every time, no matter the context. Useful, yes, but rigid.

Agentic AI—what drives Security Copilot’s new era—is different. Think of it like this: the calculator just does math; the intern with intuition decides which math to do. Copilot agents perceive patterns, reason across data, and act autonomously within your policies. They don’t just execute orders—they interpret intent. You give them the goal, and they plan the steps.

Why this matters: analysts spend roughly seventy percent of their time proving alerts aren’t threats. That’s seven of every ten work hours verifying ghosts. Security Copilot’s autonomous agents eliminate around ninety percent of that busywork by filtering false alarms before a human ever looks. An agent doesn’t tire after the first hundred alerts. It doesn’t degrade in judgment by hour twelve. It doesn’t miss lunch because it never needed one.

And here’s where it gets deviously efficient: feedback loops. You correct the agent once—it remembers forever. No retraining cycles, no repeated briefings.
Feed it one “this alert was benign,” and it rewires its reasoning for next time. One human correction scales into permanent institutional memory.

Now multiply that memory across Defender, Purview, Entra, and Intune—the entire Microsoft security suite sprouting tiny autonomous specialists. Defender’s agents investigate phishing. Purview’s handle insider risk. Entra’s audit access policies in real time. Intune’s remediate vulnerabilities before they’re on your radar. The architecture is like a nervous system: signals from every limb, reflexes firing instantly, brain centralized in Copilot.

The irony? SOCs once hired armies of analysts to handle alert volume; now they deploy agents to supervise those same analysts. Humans went from defining rules, to approving scripts, to mentoring AI interns that no longer need constant guidance.

Everything changed at the moment machine reasoning became context-aware. In rule-based automation, context kills the system—too many branches, too much logic maintenance. In agentic AI, context feeds the system—it adapts paths on the fly. And yes, that means the agent learns faster than the average human. Correction number one hundred sticks just as firmly as correction number one. Unlike Steve from night shift, it doesn’t forget by Monday.

The result is a SOC that shifts from reaction to anticipation. Humans stop firefighting and start overseeing strategy. Alerts get resolved while you’re still sipping coffee, and investigations run on loop even after your shift ends. The cost? Some pride. Analysts must adapt to supervising intelligence that doesn’t burn out, complain, or misinterpret policies. The benefit? A twenty-four-hour defense grid that gets smarter every time you tell it what it missed.

So yes, the security intern evolved.
It stopped fetching logs and started demanding datasets. Let’s meet the first one. It doesn’t check your email—it interrogates it.

Section 2: Phishing Triage Agent — Killing Alert Fatigue

Every SOC has the same morning ritual: open the queue, see hundreds of “suspicious email” alerts, sigh deeply, and start playing cyber roulette. Ninety of those reports will be harmless newsletters or holiday discounts. Five might be genuine phishing attempts. The other five—best case—are your coworkers forwarding memes to the security inbox.

Human analysts slog through these one by one, cross-referencing headers, scanning URLs, validating sender reputation. It’s exhausting, repetitive, and utterly unsustainable. The human brain wasn’t designed to digest thousands of nearly identical panic messages per day. Alert fatigue isn’t a metaphor; it’s an occupational hazard.

Enter the Phishing Triage Agent. Instead of being passively “sent” reports, this agent interrogates every email as if it were the world’s most meticulous detective. It parses the message, checks linked domains, evaluates sender behavior, and correlates with real-time threat signals from Defender. Then it decides—on its own—whether the email deserves escalation.

Here’s the twist. The agent doesn’t just apply rules; it reasons in context. If a vendor suddenly sends an invoice from an unusual domain, older systems would flag it automatically. Security Copilot’s agent, however, weighs recent correspondence patterns, authentication results, and content tone before concluding. It’s the difference between “seems odd” and “is definitely malicious.”

Consider a tiny experiment. A human analyst gets two alerts: “Subject line contains ‘payment pending.’” One email comes from a regular partner; the other from a domain off by one letter. The analyst will investigate both—painstakingly.
The agent, meanwhile, handles them simultaneously, runs telemetry checks, spots the domain spoof, closes the safe one, escalates the threat, and drafts its rationale—all before the human finishes reading the first header.

This is where natural language feedback changes everything. When an analyst intervenes—typing, “This is harmless”—the agent absorbs that correction. It re-prioritizes similar alerts automatically next time. The learning isn’t generalized guesswork; it’s specific reasoning tuned to your environment. You’re building collective memory, one dismissal at a time.

Transparency matters, of course. No black-box verdicts. The agent generates a visual workflow showing each reasoning step: DNS lookups, header anomalies, reputation scores, even its decision confidence. Analysts can reenact its thinking like a replay. It’s accountability by design.

And the results? Early deployments show up to ninety percent fewer manual investigations for phishing alerts, with mean-time-to-validate dropping from hours to minutes. Analysts spend more time on genuine incidents instead of debating whether “quarterly update.pdf” is planning a heist. Productivity metrics improve not because people work harder, but because they finally stop wasting effort proving the sky isn’t falling.

Psychologically, that’s a big deal. Alert fatigue doesn’t just waste time—it corrodes morale. Removing the noise restores focus. Analysts actually feel competent again rather than chronically overwhelmed. The Phishing Triage Agent becomes the calm, sleepless colleague quietly cleaning the inbox chaos before anyone logs in.

Basically, this intern reads ten thousand emails a day and never asks for coffee. It doesn’t glance at memes, doesn’t misjudge sarcasm, and doesn’t forward chain letters to the CFO “just in case.” It just works—relentlessly, consistently, boringly well.

Behind the sarcasm hides a fundamental shift.
Detection isn’t about endless human vigilance anymore; it’s about teaching a machine to approximate your vigilance, refine it, then exceed it. Every correction you make today becomes institutional wisdom tomorrow. Every decision compounds. So your inbox stays clean, your analysts stay sane, and your genuine threats finally get their moment of undivided attention.

And if this intern handles your inbox, the next one manages your doors.

Section 3: Conditional Access Optimization Agent — Closing Access Gaps

Identity management: the digital equivalent of herding cats.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn [https://www.linkedin.com/school/m365-show/] Substack [https://m365.show/]
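The triage-plus-feedback loop described in this episode can be sketched in a few lines. This is a toy illustration only: the signals, thresholds, and the idea of keying analyst corrections by sender domain are assumptions for the example, not Security Copilot's actual internals.

```python
# Toy sketch of autonomous phishing triage with persistent analyst feedback.
# All signal names and weights are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class TriageAgent:
    # analyst corrections keyed by sender domain -> "benign" / "escalate"
    feedback: dict = field(default_factory=dict)

    def score(self, email: dict) -> float:
        """Combine simple signals into a suspicion score in [0, 1]."""
        s = 0.0
        if email["sender_domain"] != email["claimed_domain"]:
            s += 0.5              # lookalike / spoofed domain
        if not email["spf_pass"]:
            s += 0.3              # failed sender authentication
        if "payment" in email["subject"].lower():
            s += 0.2              # urgency / financial lure
        return min(s, 1.0)

    def triage(self, email: dict) -> str:
        # A single prior correction overrides the heuristics permanently.
        remembered = self.feedback.get(email["sender_domain"])
        if remembered is not None:
            return remembered
        return "escalate" if self.score(email) >= 0.5 else "close"

    def correct(self, email: dict, verdict: str) -> None:
        """An analyst's 'this alert was benign' becomes remembered state."""
        self.feedback[email["sender_domain"]] = verdict

agent = TriageAgent()
spoof = {"sender_domain": "contoso-pay.com", "claimed_domain": "contoso.com",
         "spf_pass": False, "subject": "Payment pending"}
print(agent.triage(spoof))        # escalate: spoofed domain + failed SPF
agent.correct(spoof, "benign")    # analyst correction, given once
print(agent.triage(spoof))        # benign: the correction now persists
```

The point of the sketch is the last three lines: one correction permanently changes the verdict for similar alerts, which is the "institutional memory" the episode describes.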
5 Power Automate Hacks That Unlock Copilot ROI
Opening – Hook + Teaching Promise

You think Copilot does the work by itself? Fascinating. You deploy an AI assistant and then leave it unsupervised like a toddler near a power socket. And then you complain that it doesn’t deliver ROI. Of course it doesn’t. You handed it a keyboard and no arms.

Here’s the inconvenient truth: Copilot saves moments, not money. It can summarize a meeting, draft a reply, or suggest a next step, but those micro-wins live and die in isolation. Without automation, each one is just a scattered spark—warm for a second, useless at scale. Organizations install AI thinking they bought productivity. What they bought was potential, wrapped in marketing.

Now enter Power Automate: the hidden accelerator Microsoft built for people who understand that potential only matters when it’s executed. Copilot talks; Power Automate moves. Together, they create systems where a suggestion instantly becomes an action—documented, auditable, and repeatable. That’s the difference between “it helped me” and “it changed my quarterly numbers.”

So here’s what we’ll dissect. Five Power Automate hacks that weaponize Copilot:

Custom Connectors—so AI sees past its sandbox.
Adaptive Cards—to act instantly where users already are.
DLP Enforcement—to keep the brilliant chaos from leaking data.
Parallelism—for the scale Copilot predicts but can’t handle alone.
And Telemetry Integration—because executives adore metrics more than hypotheses.

By the end, you’ll know how to convert chat into measurable automation—governed, scalable, and tracked down to the millisecond. Think of it as teaching your AI intern to actually do the job, ethically and efficiently. Now, let’s start by giving it eyesight.

1. Custom Connectors – Giving Copilot Real Context

Copilot’s biggest limitation isn’t intelligence; it’s blindness. It can only automate what it can see. And the out-of-box connectors—SharePoint, Outlook, Teams—are a comfortable cage.
Useful, predictable, but completely unaware of your ERP, your legacy CRM, or that beautifully ugly database written by an intern in 2012. Without context, Copilot guesses. Ask for a client credit check and it rummages through Excel like a confused raccoon. Enter Custom Connectors—the prosthetic vision you attach to your AI so it stops guessing and starts knowing.

Let’s clarify what they are. A Custom Connector is a secure bridge between Power Automate and anything that speaks REST. You describe the endpoints—using an OpenAPI specification or even a Postman collection—and Power Automate treats that external service as if it were native. The elegance is boringly technical: define authentication, map actions, publish into your environment. The impact is enormous: Copilot can now reach data it was forbidden to touch before.

The usual workflow looks like this. You document your service endpoints—getClientCreditScore, updateInvoiceStatus, fetchInventoryLevels. Then you define security through Azure Active Directory so every call respects tenant authentication. Once registered, the connector appears inside Power Automate like any of the standard ones. Copilot, working through Copilot Studio or through a prompt in Teams, can now trigger flows using those endpoints. It transforms from a sentence generator into a workflow conductor.

Picture this configuration in practice. Copilot receives a prompt in Teams: “Check if Contoso’s account is eligible for extended credit.” Instead of reading a stale spreadsheet, it triggers your flow built on the Custom Connector. That flow queries an internal SQL database, applies your actual business rules, and posts the verified status back into Teams—instantly. No manual lookups, no “hold on while I find that.” The AI didn’t just talk. It acted, with authority.

Why it matters is stunningly simple. Every business complains that Copilot can’t access “our real data.” That’s by design—security before functionality.
Custom Connectors flip that equation safely. You expose exactly what’s needed—no more, no less—sealed behind tenant-level authentication. Suddenly Copilot’s suggestions are grounded in truth, not hallucination. Here’s the takeaway principle: automation without awareness is randomization. Custom Connectors make aware automation possible.

Now, the trap most admins fall into—hardcoding credentials. They create a proof of concept using a personal service account token, then accidentally ship it into production. Congratulations, you just built a time bomb that expires quietly and takes half your flows down at midnight. Always rely on Azure AD OAuth flows or managed identity authentication. Policies first, convenience later.

Another overlooked detail: API definitions. Document them properly. Outdated schema or response parameters cause silent failures that look like Copilot indecision but are actually malformed contracts. Validation isn’t optional; it’s governance disguised as sanity.

Let’s run through a miniature build to demystify it. Start in Power Automate. Under Data, choose Custom Connectors, then “New from OpenAPI file.” Import your specification. Define authentication as Azure AD and specify resource URLs. Next, run the test operation—if “200 OK” appears, you’ve just taught Power Automate a new vocabulary word. Save, publish, and now that connector becomes available inside flow designer and Copilot Studio.

From Copilot’s perspective, it’s now fluent in your internal language. When a user in Copilot Studio crafts a skill like “get customer risk level,” it calls the connector transparently. The AI doesn’t care that the data lived behind a firewall; you engineered the tunnel.

This is where ROI begins. You’ve eliminated a manual query that might take a financial analyst five minutes each time. Multiply that across hundreds of requests per week, and you’ve translated Copilot’s ideas into measurable time reduction. Automation scales the insight.
That’s ROI with receipts.

One small refinement: always register these connectors at the environment or solution level, not per user. Otherwise you create a nightmare of duplicated connectors, inconsistent authentication, and no centralized management. Environment registration ensures compliance, versioning, and shared governance—all required if you plan to connect this into DLP later.

For extra finesse, document connector capabilities in Dataverse tables so Copilot can self-describe its options. When someone asks, “What can you automate for procurement?” the AI can query those metadata entries and answer intelligently: “I can access inventory levels, purchase orders, and vendor risk data.” Congratulations, your AI now reads its own documentation.

The reason this method delivers ROI isn’t mystical—it’s mechanical. Every second Copilot saves must survive transfer into workflow. Out-of-box connectors plateau fast. Custom Connectors punch through that ceiling by bridging the blind spots of your enterprise. Now that Copilot can see—securely and contextually—let’s make it act where people actually live: inside the apps they stare at all day.

2. Adaptive Cards – Turning Suggestions into Instant Actions

Copilot’s words are smart; your users, less so when they copy-paste them into other apps to actually do something. The typical pattern is tragicomic: Copilot summarizes a project risk, the team nods, then opens five different tools just to fix one item. That’s not automation. That’s a relay race with extra paperwork.

Adaptive Cards repair that human bottleneck by planting the “Act” button directly where people already are—Teams, Outlook, or even Loop. They convert ideas into executable objects. Instead of saying “you should approve this,” Copilot can post a card that is the approval form. You press a button; Power Automate does the rest.

Here’s why this matters: attention span.
Every time a user switches context, they incur friction—those few seconds of mental reboot that destroy your supposed AI productivity gains. Adaptive Cards eliminate the jump. They let Copilot hand users an action inline, maintaining thread continuity and measurable velocity.

So what are they, technically? Structured JSON wrapped in elegance. Each card defines containers, text blocks, inputs, and actions. Power Automate uses the “Post Adaptive Card and Wait for a Response” or the modern “Send Adaptive Card to Teams” action to push them into chat. When a recipient clicks a button—Approve, Escalate, Comment—the response event triggers your next flow stage. No tab-hopping, no missing links, no “I’ll do it later.”

Implementation sounds scarier than it is. Start inside Power Automate. Build your Copilot prompt logic—say, after Copilot drafts a meeting summary identifying overdue tasks. Add the Post Adaptive Card action. Design the card JSON: a title (“Overdue Tasks”), a descriptive text block listing items, and buttons bound to dynamic fields derived from Copilot’s output. When someone selects “Mark Complete,” it triggers another flow that updates Planner or your internal ticket system.

Now you’ve transformed a suggestion into a closed feedback loop. Copilot reads conversation context, surfaces an action card, users respond in-place, and the workflow executes—all without leaving the chat thread. That seamlessness is what converts novelty into ROI.

A proper design principle here: the card shouldn’t require explanation. If you have to post instructions next to it, you’ve failed the design review. Use icons, concise labels, and dynamic previews—Copilot can populate summaries like “Task: Update client pitch deck – Due in 2 days.” People click; Power Automate handles the rest. You measure completion time, not comprehension time.

And yes, they work beyond Teams.
In Outlook, Adaptive Cards appear inline in email—perfect for scenarios like approval requests, time-off confirmation, or budget sign-off. The same card schema carries across hosts, meaning you design once, reuse anywhere. It’s UI unification without the overhead of a full app.

Typical pitfall? Schema sloppiness. Cards with missing version headers or malformed bindings fail quietly instead of rendering.
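To make the "structured JSON" concrete, here is a minimal card of the kind described above: a title, a task line derived from Copilot output, and Action.Submit buttons whose data values let the flow branch on the response. The task text and the `verdict` data keys are illustrative assumptions, not a prescribed contract.

```python
# Build a minimal Adaptive Card payload (schema version 1.4) of the kind
# the episode describes. The task content and "data" values are examples.
import json

def overdue_tasks_card(tasks: list[str]) -> dict:
    return {
        "type": "AdaptiveCard",
        "version": "1.4",   # a missing version header is a classic render failure
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "body": [
            {"type": "TextBlock", "text": "Overdue Tasks",
             "weight": "Bolder", "size": "Medium"},
            # one wrapped text line per task, populated from Copilot's summary
            *[{"type": "TextBlock", "text": t, "wrap": True} for t in tasks],
        ],
        "actions": [
            {"type": "Action.Submit", "title": "Mark Complete",
             "data": {"verdict": "complete"}},
            {"type": "Action.Submit", "title": "Escalate",
             "data": {"verdict": "escalate"}},
        ],
    }

card = overdue_tasks_card(["Task: Update client pitch deck – Due in 2 days"])
print(json.dumps(card, indent=2))
```

In a flow, this JSON would go into the Post Adaptive Card action; the clicked button's `data` object is what the next flow stage receives and branches on.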
Master Power Platform AI: The 4 New Tools Changing Everything
Opening: The Problem with “Future You”

Most Power Platform users still believe “AI” means Copilot writing formulas. That’s adorable—like thinking electricity is only good for lighting candles faster. The reality is Microsoft has quietly launched four tools that don’t just assist you—they redefine what “building” even means. Dataverse Prompt Columns, Form Filler, Generative Pages, and Copilot Agents—they’re less “new features” and more tectonic shifts. Ignore them, and future you becomes the office relic explaining manual flows in a world that’s already self-automating.

Here’s the nightmare: while you’re still wiring up Power Fx and writing arcane validation logic, someone else is prompting Dataverse to generate data intelligence on the fly. Their prototypes build themselves. Their bots delegate tasks like competent employees. And your “manual app” will look like a museum exhibit. Let’s dissect each of these tools before future you starts sending angry emails to present you for ignoring the warning signs.

Section 1: Dataverse Prompt Columns — The Dataset That Thinks

Static columns are the rotary phones of enterprise data. They sit there, waiting for you to tell them what to do, incapable of nuance or context. In 2025, that’s not just inefficient—it’s embarrassing. Enter Dataverse Prompt Columns: the first dataset fields that can literally interpret themselves. Instead of formula logic written in Power Fx, you hand the column a natural-language instruction, and it uses the same large language model behind Copilot to decide what the output should be. The column itself becomes the reasoning engine.

Think about it. A traditional calculated column multiplies or concatenates values. A Prompt Column writes logic. You don’t code it—you explain intent.
For example, you might tell it, “Generate a Teams welcome message introducing the new employee using their name, hire date, and favorite color.” Behind the scenes, the AI synthesizes that instruction, references the record data, and outputs human-level text—or even numerical validation flags—whenever that record updates. It’s programmatically creative.

Why does this matter? Because data no longer has to be static or dumb. Prompt Columns create a middle ground between automation and cognition. They interpret patterns, run context-sensitive checks, or compose outputs that previously required entire Power Automate flows. Less infrastructure, fewer breakpoints, more intelligence at the source. You can have a table that validates record accuracy, styles notifications differently depending on a user’s role, or flags suspicious entries with a Boolean confidence score—all without writing branching logic.

Compare that to the Power Fx era, where everything was brittle. One change in schema and your formula chain collapsed like bad dentistry. Prompt logic is resistant to those micro-fractures because it’s describing intention, not procedure. You’re saying “Summarize this record like a human peer would,” and the AI handles the complexity—referencing multiple columns, pulling context from relationships, even balancing tone depending on the field content. Fewer explicit rules, but far better compliance with the outcome you actually wanted.

The truth? It’s the same language interface you’ll soon see everywhere in Microsoft’s ecosystem—Power Apps, Power Automate, Copilot Studio. Learn once, deploy anywhere. That makes Dataverse Prompt Columns the best training field for mastering prompt engineering inside the Microsoft stack. You’re not just defining formulas; you’re shaping reasoning trees inside your database.

Here’s a simple scenario. You manage a table of new hires. Each record contains name, department, hire date, and favorite color.
Create a Prompt Column that instructs: “Draft a friendly Teams post introducing the new employee by name, mention their department, and include a fun comment related to their favorite color.” When a record is added, the column generates the entire text: “Please welcome Ashley from Finance, whose favorite color—green—matches our hopes for this quarter’s budget.” That text arrives neatly structured, saved, and reusable across flows or notifications. No need for multistep automation. The table literally communicates.

Now multiply that by every table in your organization. Product descriptions that rewrite themselves. Quality checks that intelligently evaluate anomalies. Compliance fields that explain logic before escalation. You start realizing: this isn’t about AI writing content; it’s about data evolving from static storage to active reasoning.

Of course, the power tempts misuse. One common mistake is treating Prompt Columns like glorified formulas—stuffing them with pseudo-code. That suffocates their value. Another misstep: skipping context tokens. You can reference other fields in the prompt (slash commands expose them), and if you omit them, the model works blind. Context is the oxygen of good prompts; specify everything you need it to know about that record. Finally, over-fitting logic—asking it to do ten unrelated tasks—creates noise. It’s a conversational model, not an Excel wizard trapped in a cell. Keep each prompt narrow, purposeful, and auditable.

From a return-on-investment standpoint, this feature quietly collapses your tech debt. Fewer flows running means less latency and fewer points of failure. Instead of maintaining endless calculated expressions, your Dataverse schema becomes simpler: everything smart happens inside adaptable prompts. And because the same prompt engine spans Dataverse, Power Automate, and Copilot Studio, your learning scales across every product. Master once, profit everywhere.

Let’s talk about strategic awareness.
Prompt Columns are Microsoft’s sneak preview of how all data services are evolving—toward semantic control layers rather than procedural logic. Over the next few years, expect this unified prompt interface to appear across Excel formulas, Loop components, and even SharePoint metadata. When that happens, knowing how to phrase intent will be as essential as knowing DAX once was. The syntax changes from code to conversation.

So if you haven’t already, start experimenting. Spin up a developer environment—no excuses about licensing. Create a table, add a Prompt Column, instruct it to describe or flag something meaningful, and test its variations. You’re not just learning a feature; you’re rehearsing the next generation of application logic. Once your columns can think, your forms can fill themselves—literally.

Section 2: AI Form Filler — Goodbye, Manual Data Entry

Let’s talk about the least glamorous task in enterprise software—data entry. For decades, organizations have built million-dollar systems just to watch human beings copy-paste metadata like slightly more expensive monkeys. The spreadsheet era never truly ended; it mutated inside web forms. Humans type inconsistently, skip fields, misread dates, and introduce small, statistically inevitable errors that destroy analytics downstream. The problem isn’t just tedium—it’s entropy disguised as work.

Enter Form Filler, Microsoft’s machine-taught intern hiding inside model-driven apps. Officially it’s called “Form Assist,” which sounds politely boring, but what it actually does is parse unstructured or semi-structured data—like an email, a chat transcript, or even a screenshot—and populate Dataverse fields automatically. You paste. It interprets. It builds the record for you. The days of alt-tabbing between Outlook and form fields are, mercifully, numbered.

Here’s how it works. You open a model-driven form, click the “smart paste” or Form Assist option, and dump in whatever text or image contains the data.
Maybe it’s a hiring email announcing Jennifer’s start date or a PDF purchase order living its best life as a scanned bitmap. The tool extracts entities—names, departments, dates, amounts—and matches them to schema fields. It even infers relationships between values when explicit labels are missing. The result populates instantly, but it doesn’t auto-save until you confirm, giving you a sanity-check stage called “Accept Suggestions.” Translation: AI fills it, but you stay accountable.

The technology behind it borrows from the same large-language-model reasoning that powers Copilot chat, but here it’s surgically focused. It isn’t just making text; it’s identifying structured data inside chaos. Imagine feeding it a screen capture of an invoice—vendor, total, due date—in one paste operation. The model recognizes the shapes, text, and context, not pixel by pixel but semantically. This isn’t OCR; it’s comprehension with context weightings. That’s why it outperforms legacy extraction tools that depend on templates.

Now, before you start dreaming of zero-click data entry utopia, let’s be precise. Lookup fields? Not yet. Image attachments? Sometimes. Complex multi-record relationships? Patience, grasshopper. The system still needs deterministic bindings for certain data types; it’s a cautious AI, not a reckless one. But the return on effort is still enormous—Form Filler already removes seventy to eighty percent of manual form work in typical scenarios. That’s not a gimmick; that’s a measurable workload collapse. Administrative teams recapture hours per user per week, and because humans aren’t rushing, input accuracy skyrockets.

Skeptics will say, “It misses a few fields; it’s still in preview.” Correct—and irrelevant. AI doesn’t need to be perfect to be profitable; it just needs to out-perform your interns. And it does.
The delightful irony is that the more you use it, the better your staff learns prompt-quality thinking: how to structure textual data for machine interpretation. Every paste becomes a quiet training session in usable syntax. Gradually, your team evolves from passive typists to semi-prompt engineers, feeding structured cues rather than raw noise. That cultural upgrade is priceless.

Let’s look at a tangible use case. Picture your HR coordinator onboarding new employees. Each week…
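The extract-then-confirm pattern this episode attributes to Form Filler can be sketched crudely: pull entities out of pasted text, map them onto schema fields, and stage them for a human "Accept Suggestions" step. The field names and regular expressions below are illustrative assumptions; the real feature uses language-model comprehension, not regexes.

```python
# Toy sketch of the Form Filler pattern: extract entities from pasted
# text, map them to schema fields, and stage (not save) the suggestions.
# Field names and patterns are invented for illustration.
import re

SCHEMA = {
    "name":       re.compile(r"welcome\s+(\w+)", re.I),
    "department": re.compile(r"from\s+(\w+)", re.I),
    "hire_date":  re.compile(r"starts?\s+on\s+([\d-]+)", re.I),
}

def suggest_fields(text: str) -> dict:
    """Return field suggestions; nothing is saved until a human accepts."""
    return {field: (m.group(1) if (m := rx.search(text)) else None)
            for field, rx in SCHEMA.items()}

email = "Please welcome Ashley from Finance; she starts on 2025-03-01."
suggestions = suggest_fields(email)
print(suggestions)
# {'name': 'Ashley', 'department': 'Finance', 'hire_date': '2025-03-01'}
```

The design point mirrors the episode's "Accept Suggestions" stage: the function only proposes values, and the record is written only after an explicit confirmation step.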
Master AD to Entra ID Migration: Troubleshooting Made Easy
Opening: The Dual Directory Dilemma

Managing two identity systems in 2025 is like maintaining both a smartphone and a rotary phone—one’s alive, flexible, and evolving; the other’s a museum exhibit you refuse to recycle. Active Directory still sits in your server room, humming along like it’s 2003. Meanwhile, Microsoft Entra ID is already running the global authentication marathon, integrating AI-based threat signals and passwordless access. And yet, you’re letting them both exist—side by side, bickering over who owns a username.

That’s hybrid identity: twice the management, double the policies, and endless synchronization drift. Your on-premises AD enforces outdated password policies, while Entra ID insists on modern MFA. Somewhere between those two worlds, a user gets locked out, a Conditional Access rule fails, or an app denies authorization. The culprit? Dual Sources of Authority—where identity attributes are governed both locally and in the cloud, never perfectly aligned.

What’s at stake here isn’t just neatness; it’s operational integrity. Outdated Source of Authority setups cause sync failures, mismatched user permissions, and those delightful “why can’t I log in” tickets.

The fix is surprisingly clean: shifting the Source of Authority—groups first, users next—from AD to Entra ID. Do it properly, and you maintain access, enhance visibility, and finally retire the concept of manual user provisioning. But skip one small hidden property flag, and authentication collapses mid-migration. We’ll fix that, one step at a time.

Section 1: Understanding the Source of Authority

Let’s start with ownership—specifically, who gets to claim authorship over your users and groups. In directory terms, the Source of Authority determines which system has final say over an object’s identity attributes. Think of it as the “parental rights” of your digital personas. If Active Directory is still listed as the authority, Entra ID merely receives replicated data.
If Entra ID becomes the authority, it stops waiting for its aging cousin on-prem to send updates and starts managing directly in the cloud.

Why does this matter? Because dual control obliterates the core of Zero Trust. You can’t verify or enforce policies consistently when one side of your environment uses legacy NTLM rules and the other requires FIDO2 authentication. Audit trails fracture, compliance drifts, and privilege reviews become detective work. Running two authoritative systems is like maintaining two versions of reality—you’ll never be entirely sure who a user truly is at any given moment.

Hybrid sync models were designed as a bridge, not a forever home. Entra Connect or its lighter sibling, Cloud Sync, plays courier between your directories. It synchronizes object relationships—usernames, group memberships, password hashes—ensuring both directories recognize the same entities. But this arrangement has one catch: only one side can write authoritative changes. The moment you try to modify cloud attributes for an on-premises–managed object, Entra ID politely declines with a “read-only” shrug.

Now enter the property that changes everything: IsCloudManaged. When set to true for a group or user, it flips the relationship. That object’s attributes, membership, and lifecycle become governed by Microsoft Entra ID. The directory that once acted as a fossil record—slow, static, limited by physical infrastructure—is replaced by a living genome that adapts in real time. Active Directory stores heritage. Entra ID manages evolution.

This shift isn’t theoretical. When a group becomes cloud-managed, you can leverage capabilities AD could never dream of: Conditional Access, Just-In-Time assignments, access reviews, and MFA enforcement—controlled centrally and instantly. Security groups grow and adjust via Graph APIs or PowerShell with modern governance baked in.

Think of the registry in AD as written in stone tablets.
Entra ID, on the other hand, is editable DNA—continuously rewriting itself to keep your identities healthy. Refusing to move ownership simply means clinging to an outdated biology.

Of course, there’s sequencing to respect. You can’t just flip every object to cloud management and hope for the best. You start by understanding the genetic map—who depends on whom, which line-of-business applications authenticate through those security groups, and how device trust chains back to identity. Once ownership is clarified, migration becomes logical prioritization.

If the Source of Authority defines origin, then migration defines destiny. And now that you understand who’s really in charge of your identities, the next move is preparing your environment to safely hand off that control.

Section 2: Preparing Your Environment for Migration

Before you can promote Entra ID to full sovereignty, you need to clean the kingdom. Most admins skip this step, then act surprised when half the objects refuse to synchronize or a service account evaporates. Preparation isn’t glamorous, but it’s the difference between a migration and a mess.

Start with a full census. Identify every group and user object that still flows through Entra Connect. Check the sync scope, the connected OUs, and whether any outdated filters are blocking objects that should exist in the cloud. You’d be shocked how many organizations find entire departments missing from Entra simply because someone unchecked an OU five years ago. The point is visibility: you can’t transfer authority over what you can’t see.

Once you know who and what exists, begin cleansing your data. Active Directory is riddled with ghosts—stale accounts, old service principals, duplicate UPNs. Clean them out. Duplicate User Principal Names in particular will block promotion, because two clouds can’t claim the same sky. Remove or rename collisions before proceeding.
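To make that sweep concrete, here is a minimal sketch of a duplicate-UPN check over an exported user list. The record shape and field names are hypothetical stand-ins for whatever your AD export produces, not an official format; the one non-negotiable detail is the case-insensitive comparison, since UPNs that differ only in casing still collide.

```python
from collections import defaultdict

def find_upn_collisions(users):
    """Group exported user records by case-insensitive UPN and
    return only the UPNs claimed by more than one account."""
    seen = defaultdict(list)
    for user in users:
        # UPNs are case-insensitive, so normalize before comparing.
        seen[user["userPrincipalName"].lower()].append(user["samAccountName"])
    return {upn: owners for upn, owners in seen.items() if len(owners) > 1}

# Example: two accounts claim jdoe@contoso.com with different casing.
export = [
    {"userPrincipalName": "jdoe@contoso.com", "samAccountName": "jdoe"},
    {"userPrincipalName": "JDoe@contoso.com", "samAccountName": "jdoe2"},
    {"userPrincipalName": "asmith@contoso.com", "samAccountName": "asmith"},
]
print(find_upn_collisions(export))
# → {'jdoe@contoso.com': ['jdoe', 'jdoe2']}
```

In practice the input would come from a CSV export of your directory; any UPN that appears in the result has to be renamed or removed before the promotion will succeed.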
While you’re at it, reconcile any irregular attributes—misaligned display names, strange proxy addresses, and non‑standard primary emails. These details matter. When you flip an object to cloud management, Entra will treat that data as canonical truth. Garbage in becomes garbage immortalized.

Then confirm your synchronization channels are healthy. Open the Entra Connect Health dashboard and verify that both import and export cycles complete without errors. If you’re still using legacy Azure AD Connect, ensure you’re on a supported version; Microsoft quietly deprecates old build chains and surprises you with patch incompatibilities. Schedule a manual sync run and watch the logs. No warnings should remain, only reassuring green checks.

Next, document. Every attribute mapping, extension schema, and custom rule you currently rely on should be recorded. Yes, you think you’ll remember how everything ties together, but the moment an account stops syncing, your brain will purge that knowledge like cache data. Write it down. Consider exporting complete connector configurations if you’re using Entra Connect. Back up your scripts. Because when you migrate the Source of Authority, rollback isn’t a convenient button—it’s a resurrection ritual.

Security groundwork comes next. There’s no point modernizing your directory if you still allow weak authentication. Enforce modern MFA before migration: FIDO2 keys, authenticator‑based login, conditional policy requiring compliant devices. These become native once an object is cloud‑managed, but the infrastructure should already expect them. Test your Conditional Access templates—specifically, whether newly cloud‑managed entities fall under expected controls. A mismatch here can lock out administrators faster than you can type “support ticket.”

Then design your migration sequence. A sensible order keeps systems breathing while you swap their spine. Start with groups rather than user accounts because memberships reveal dependency chains.
Prioritize critical application groups—anything gating finance, HR, or secure infrastructure. Those groups govern app policy; by moving them first, you prepare the environment for users without breaking authentication. After those, pick pilot groups of ordinary office users. Watch how they behave once their Source of Authority becomes Entra ID. Confirm they can still access on‑premises resources through hybrid trust. Iterate, fix, and expand. Leave high‑risk or complex cross‑domain users for last.

One final precaution: ensure Kerberos and certificate trust arrangements on‑prem can still recognize cloud‑managed identities. That means having modern authentication connectors installed and fully patched. When you move objects, they no longer inherit updates from AD; instead, Entra drives replication down to the local environment via SID matching. If your trust boundary is brittle, you’ll lose seamless access.

At this point, your environment isn’t just clean—it’s primed. You’ve audited, patched, and verified every relationship that could fail you mid‑migration. And since clean directories never stay clean, remember this: future migrations begin the moment you finish the previous one. Preparation is perpetual. Once those boxes are ticked, you’re ready to move from architecture to action, beginning where it’s safest—the groups.

Section 3: Migrating Groups to Cloud Management

Groups are the connective tissue of identity. They hold permissions, drive access, and define what any given user can touch. Move them wrong, and you’ll break both the skeleton and the nervous system of your environment. But migrate them systematically, and the transition is almost anticlimactic.

Start by identifying which groups should make the leap first. The ones tied to key applications are prime candidates—particularly security groups controlling production systems, SharePoint permissions, or line‑of‑business apps. Find them in Entra Admin Center and note their Object IDs.
Each object’s ID is its passport for any Graph or PowerShell command. Checking the details page will also show whether it currently displays “Source: Windows Server Active Directory.” That phrase means the group is still owned by on‑premises AD.
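To ground the IsCloudManaged idea in something tangible, here is a sketch of the kind of Microsoft Graph call involved in flipping a group’s Source of Authority. The endpoint shape (`/beta/groups/{id}/onPremisesSyncBehavior` with an `isCloudManaged` body) reflects the preview-era API and should be treated as an assumption; verify the current endpoint, permissions, and GA status in the Graph documentation before running anything like this. The sketch only builds the request rather than sending it:

```python
import json

# SOA conversion lived on the beta endpoint at the time of writing
# (assumption; confirm against current Graph docs before use).
GRAPH_BASE = "https://graph.microsoft.com/beta"

def build_soa_flip_request(group_object_id: str):
    """Build (method, url, body) for converting a synced group to
    cloud management by setting isCloudManaged on its sync behavior."""
    url = f"{GRAPH_BASE}/groups/{group_object_id}/onPremisesSyncBehavior"
    body = json.dumps({"isCloudManaged": True})
    return ("PATCH", url, body)

method, url, body = build_soa_flip_request("11111111-2222-3333-4444-555555555555")
print(method, url)
print(body)
```

After a successful flip, the group’s details page should stop showing “Source: Windows Server Active Directory”; taking the object out of the sync scope is a separate, deliberate step.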
Control My Power App with Copilot Studio
Opening: “The AI Agent That Runs Your Power App”

Most people still think Copilot writes emails and hallucinates budget summaries. Wrong. The latest update gives it opposable thumbs. Copilot Studio can now physically use your computer—clicking, typing, dragging, and opening apps like a suspiciously obedient intern. Yes, Microsoft finally taught the cloud to reach through the monitor and press buttons for you.

And that’s not hyperbole. The feature is literally called “Computer Use.” It lets a Copilot agent act inside a real Windows session, not a simulated one. No more hiding behind connectors and APIs; this is direct contact with your desktop. It can launch your Power App, fill fields, and even submit forms—all autonomously. Once you stop panicking, you’ll realize what that means: automation that transcends the cloud sandbox and touches your real-world workflows.

Why does this matter? Because businesses run on a tangled web of “almost integrated” systems. APIs don’t always exist. Legacy UIs don’t expose logic. Computer Use moves the AI from talking about work to doing the work—literally moving the cursor across the screen. It’s slow. It’s occasionally clumsy. But it’s historic. For the first time, Office AI interacts with software the way humans do—with eyes, fingers, and stubborn determination.

Here’s what we’ll cover: setting it up without accidental combustion, watching the AI fumble through real navigation, dissecting how the reasoning engine behaves, then tackling the awkward reality of governance. By the end, you’ll either fear for your job or upgrade your job title to “AI wrangler.” Both are progress.

Section 1: What “Computer Use” Really Means

Let’s clarify what this actually is before you overestimate it. “Computer Use” inside Copilot Studio is a new action that lets your agent operate a physical or virtual Windows machine through synthetic mouse and keyboard input.
Imagine an intern staring at the screen, recognizing the Start menu, moving the pointer, and typing commands—but powered by a large language model that interprets each pixel in real time. That’s not a metaphor. It literally parses the interface using computer vision and decides its next move based on reasoning, not scripts.

Compare that to a Power Automate flow or an API call. Those interact through defined connectors; predictable, controlled, and invisible. This feature abandons that polite formality. Instead, your AI actually “looks” at the UI like a user. It can misclick, pause to think, and recover from errors. Every run is different because the model reinterprets the visual state freshly each time. That unpredictability isn’t a bug—it’s adaptive problem solving. You said “open Power Apps and send an invite,” and it figures out which onscreen element accomplishes that, even if the layout changes.

Microsoft calls this agentic AI—an autonomous reasoning agent capable of acting independently within a digital environment. It’s the same class of system that will soon drive cross-platform orchestration in Fabric or manage data flows autonomously. The shift is profound: instead of you guiding automation logic, you set intent, and the agent improvises the method.

The beauty, of course, is backward compatibility with human nonsense. Legacy desktop apps, outdated intranet portals, anything unintegrated—all suddenly controllable again. The vision engine provides the bridge between modern AI language models and the messy GUIs of corporate history.

But let’s be honest: giving your AI mechanical control requires more than enthusiasm. It needs permission, environment binding, and rigorous setup. Think of it like teaching a toddler to use power tools—possible, but supervision is mandatory. Understanding how Computer Use works under the hood prepares you for why the configuration feels bureaucratic. Because it is.
The next part covers exactly that setup pain in excruciating, necessary detail so the only thing your agent breaks is boredom, not production servers.

Section 2: Setting It Up Without Breaking Things

All right, you want Copilot to touch your machine. Brace yourself. This process feels less like granting autonomy and more like applying for a security clearance. But if you follow the rules precisely, the only thing that crashes will be your patience, not Windows.

Step one—machine prerequisites. You need Windows 10 or 11 Pro or better. And before you ask: yes, “Home” editions are excluded. Because “Home” means not professional. Copilot refuses to inhabit a machine intended for gaming and inexplicable toolbars. You also need the Power Automate Desktop runtime installed. That’s the bridge connecting Copilot Studio’s cloud instance to your local compute environment. Without it, your agent is just shouting commands into the void.

Install Power Automate Desktop from Microsoft, run the setup, and confirm the optional component called Machine Runtime is present. That’s the agent’s actual driver’s license. Skip that and nothing will register. Once it’s installed, launch the Machine Runtime app; sign in with your work or school Entra account—the same one tied to your Copilot Studio environment. The moment you sign in, pick an environment to register the PC under. There’s no confirmation dialog—it simply assumes you made the right decision. Microsoft’s version of trust.

Step two—verify registration in the Power Automate portal. Open your browser, go to Power Automate → Monitor → Machines, and you should see your device listed with a friendly green check mark. If it isn’t there, you’re either on Windows Home (I told you) or the runtime didn’t authenticate properly. Reinstall, reboot, and resist cursing—it doesn’t help, though it’s scientifically satisfying.

Step three—enable it for Computer Use. Inside the portal, open the machine’s settings pane.
You’ll find a toggle labeled “Enable for Computer Use.” Turn it on. You’ll get a stern warning about security best practices—as you should. You’re authorizing an AI system to press keys on your behalf. Make sure this machine contains no confidential spreadsheets named “final_v27_reallyfinal.xlsx.” Click Activate, then Save. Congratulations, you’ve just created a doorway for an autonomous agent.

Step four—confirm compatibility. Computer Use requires runtime version 2.59 or newer. Anything older and the feature simply won’t appear in Copilot Studio. Check the version on your device or in the portal list. If you’re current, you’re ready.

Now, about accounts. You can use a local Windows user or a domain profile; both work. But the security implications differ. A local account keeps experiments self‑contained. A domain account inherits corporate access rights, which is tantamount to letting the intern borrow your master keycard. Be deliberate. Credentials persist between sessions, so if this is a shared PC, you could end up with multiple agents impersonating each other—a delightful compliance nightmare.

Final sanity check: run a manual test from Copilot Studio. In the Tools area, try creating a new “Computer Use” tool. If the environment handshake worked, you’ll see your machine as a selectable target. If not—backtrack, because something’s broken. Likely you, not the system.

It’s bureaucratic, yes, but each click exists for a reason. You’re conferring physical agency on software. That requires ceremony. When you finally see the confirmation message, resist the urge to celebrate. You’ve only completed orientation. The real chaos begins when the AI starts moving your mouse.

Section 3: Watching the AI Struggle (and Learn)

Here’s where theory meets slapstick. I let the Copilot agent run on a secondary machine—an actual Windows laptop, not a sandbox—and instructed it to open my Power App and send a university invite. You’d expect a swift, robotic performance.
Instead, imagine teaching a raccoon to operate Excel. Surprisingly determined. Terrifyingly curious. Marginally successful.

The moment I hit Run, the test interface in Copilot Studio showed two views: on the right, a structured log detailing its thoughts; on the left, a live feed of that sacrificial laptop. The cursor twitched, paused—apparently thinking—and then lunged for the Start button. Success. It typed “Power Apps,” opened the app, and stared at the screen as if waiting for applause. Progress achieved through confusion.

Now, none of this was pre‑programmed. It wasn’t a macro replaying recorded clicks; it was improvisation. Each move was a new decision, guided by vision and reasoning. Sometimes it used the Start menu; sometimes the search bar; occasionally, out of creative rebellion, it used the Run dialog. The large language model interpreted screenshots, reasoned out context, and decided which action would achieve the next objective. It’s automation with stage fright—fascinating, if occasionally painful to watch.

Then came the date picker. The great nemesis of automation. The agent needed to set a meeting for tomorrow. Simple for a human, impossible for anyone who’s ever touched a legacy calendar control. It clicked the sixth, the twelfth, then decisively chose the thirteenth. Close, but temporal nonsense. Instead of crashing, it reasoned again, reopened the control, and kept trying—thirteen, eight, ten—like a toddler learning arithmetic through trial. Finally, it surrendered to pure typing and entered the correct date manually. Primitive? Yes. Impressive? Also yes. Because what you’re seeing there isn’t repetition; it’s adaptation.

That’s the defining point of agentic behavior. The AI doesn’t memorize keystrokes; it understands goals. It assessed that manual typing would solve what clicking couldn’t. That’s autonomous reasoning. You can’t script that with Power Automate’s flow logic.
It’s the digital equivalent of “fine, I’ll do it myself.”

This unpredictable exploration means every run looks a little different. Another attempt produced the right date on its third click. A third attempt nailed it instantly but missed the “OK” button afterward, accidentally reverting its work.
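The pattern the agent converged on in that date-picker fight, try a bounded number of UI interactions and then fall back to a more direct input method, is worth internalizing if you build your own automations on top of this. A minimal sketch, with `click_date`, `type_date`, and `read_date` as hypothetical stand-ins for real UI actions:

```python
def set_date(target, click_date, type_date, read_date, max_clicks=3):
    """Try the date picker up to max_clicks times; if the control
    still shows the wrong value, fall back to typing it directly.
    click_date/type_date/read_date are hypothetical UI actions."""
    for _ in range(max_clicks):
        click_date(target)
        if read_date() == target:
            return "clicked"
    type_date(target)  # fallback: bypass the flaky picker entirely
    return "typed"

# Simulate a picker that lands on the wrong day every time,
# mirroring the misbehaving calendar control described above.
state = {"value": None}
flaky_click = lambda d: state.update(value="wrong-day")
direct_type = lambda d: state.update(value=d)
print(set_date("2025-06-11", flaky_click, direct_type, lambda: state["value"]))
# → typed
```

The design point is the retry budget: without it, a stubborn control traps the agent in an infinite click loop; with it, the agent degrades gracefully to the bluntest action that still satisfies the goal.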