Cloud migration offers numerous benefits, but it also comes with its fair share of challenges. Delve into the most common challenges faced in cloud migration.
Let me paint you a picture: we were managing the Microsoft 365 environment for one of our long-time customers, let’s call them Company A. We had everything running smoothly—mailboxes organized, Teams working like a charm, and security policies in place. You know, the usual IT perfection (or at least close enough!). Then, out of nowhere, Company A gets bought by Company B—another company that was also using Microsoft 365 but on a completely different tenant. I mean, what’s better than managing one tenant? Managing two, of course! 🙄
The plan was simple: assess the current environment, plan the migration, and move over a few hundred—okay, maybe a few thousand—users from Tenant A to Tenant B. Easy, right? Well, it would have been if the CEO of Company B (now CEO of both companies) hadn’t decided to send a heartfelt, company-wide welcome email to all employees from Company A. You know, one of those, “Welcome to the family, let’s make magic happen together” emails.
Sounds nice, right? Except that for some reason, this email didn’t land in everyone’s inbox. Oh no, it decided to take a detour straight into the junk folder of several employees in Tenant A. And of course, it couldn’t be just anyone. Nope—it’s always the CEO, CFO, or some other high-level executive who faces this kind of issue. Why is it always the top brass? I’m convinced it’s the universe’s way of keeping us humble.
So there we were, tasked with quietly and efficiently moving the CEO’s email out of the junk folder and into the inbox—without raising any eyebrows, of course. No one needs to know that the new CEO’s warm welcome was rejected by the company’s spam filter.
That’s where the Microsoft Graph API comes in to save the day (and our sanity). In this blog, I’m going to walk you through how we used the Graph API to find those misplaced emails and move them to the inbox, all without anyone even noticing. You’ll get code samples, tips, and maybe a few laughs along the way—because, let’s be honest, if we can’t laugh at our IT woes, what else can we do?
Stick around, and I’ll show you how to become the email-moving ninja your CEO desperately needs. Ready? Let’s dive in!
Alright, let’s get into the nitty-gritty of how we’re going to rescue those poor, misplaced emails from the junk folder using the Microsoft Graph API. Before we start flipping bits and bytes, here’s what we’ll be doing (and don’t worry, I’ll walk you through it step by step—funny analogies included).
Before we can start shuffling emails from the junk folder to the inbox, we need permission. Think of it like trying to get into a fancy club—you need to show your VIP pass at the door. In our case, that VIP pass is the OAuth2 access token, which lets us call the Microsoft Graph API to interact with users’ mailboxes.
In this step, we’ll be registering an app in Azure AD, granting it the permissions it needs, and requesting an OAuth2 access token.
Once we’ve got our access token (a.k.a. the keys to the castle), it’s time to go email hunting. The good news is, the Graph API is like a professional detective—it’ll help us track down those misplaced CEO emails that thought they could hide in the junk folder.
We’ll use the API to search each user’s JunkEmail folder, filtering for messages that match the subject of the CEO’s email.
Think of it like finding that one sock that always goes missing after laundry day. You know it’s there somewhere, hiding in plain sight.
Now that we’ve found the elusive CEO email in the junk folder, it’s time to move it where it rightfully belongs—the inbox. This is the digital equivalent of putting socks back in the sock drawer after laundry day. It’s a simple act, but one that makes all the difference in avoiding chaos. 😅
In this step, we’ll call the Graph API’s move endpoint to send the message from the JunkEmail folder to the Inbox, right where it belongs.
Finally, we’ve got to make sure all this happens without anyone noticing. No one needs to know that their brand-new CEO’s heartfelt welcome email was considered digital garbage by the spam filter. We’ll move the emails in stealth mode—silent, efficient, and completely under the radar.
In this step, we’ll let the script do its work quietly in the background, with only console output for us to monitor.
Because the last thing you want is for someone to ask, “Hey, why did the CEO’s email land in junk?”
Alright, warriors, the first step of our mission is to secure access to the Graph API—this is your golden ticket to all the inbox-saving power. But, like any good ninja, we don’t just barge in through the front door. We need to sneak in the right way by grabbing an OAuth2 token that’ll let us call the Graph API like pros. Ready to get your key to the castle? Let’s break it down:
To get started, you need to register your app in Azure Active Directory. This is where we create a stealthy identity for our app, which we’ll use to request the magical token that gives us access.
Now that we’ve registered the app, we need to give it the right permissions to read and move emails. Because without the right permissions, our ninja tools are pretty much useless.
Now your app has the power it needs to read and move emails. Pretty cool, right? 🔥
Next up, we need to create a Client Secret. This is like your app’s katana—it’ll let you authenticate and request access tokens when you call the Graph API.
Your token is your pass to the API, and just like any secret tool in your ninja arsenal, you need to protect it. This token is typically valid for 60 minutes, so make sure you refresh it before it expires.
What This Script Does
Before we start rummaging through users’ junk folders, we need to authenticate with the Graph API. This is done using OAuth2, and the script will request an access token by passing in the ClientID, TenantID, and ClientSecret of our Azure AD app.
Here’s the function that handles this for us:
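A minimal sketch of such a function, assuming the client credentials (app-only) flow; the name Get-GraphToken and its parameter names are illustrative:

```powershell
function Get-GraphToken {
    param (
        [string]$TenantId,
        [string]$ClientId,
        [string]$ClientSecret
    )

    # Build the client-credentials request for the Microsoft identity platform
    $body = @{
        client_id     = $ClientId
        client_secret = $ClientSecret
        scope         = "https://graph.microsoft.com/.default"
        grant_type    = "client_credentials"
    }

    # Ask Azure AD for an app-only access token
    $response = Invoke-RestMethod -Method Post `
        -Uri "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token" `
        -Body $body

    return $response.access_token
}

$token = Get-GraphToken -TenantId "<TenantID>" -ClientId "<ClientID>" -ClientSecret "<ClientSecret>"
```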
This function sends a request to Azure AD, asking for a token that gives us permission to access users’ mailboxes. You’ll need to replace <ClientID>, <TenantID>, and <ClientSecret> with your actual values from your Azure AD app registration. This token is our “all-access pass” to the Graph API. Fun fact: Getting this token feels like having the master key to the building…except this key only opens inboxes and junk folders. 🗝️
To avoid manually specifying each user, this script reads a list of users from a text file. Each email in the file will be processed in turn. Here’s how we grab that list of users:
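A minimal sketch, assuming one address per line in a plain text file (the path is illustrative):

```powershell
# Read one email address per line from the users file
$users = Get-Content -Path "C:\Scripts\users.txt"
```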
Each user’s email address should be listed on a new line in the text file. The script will iterate over this list and handle junk email detection for each user. It’s a nice bulk operation—no need to handle one user at a time.
Now, for each user in our list, we’ll search their JunkEmail folder for any messages that match the specified subject. We’re using the Microsoft Graph API to do this.
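Here’s a sketch of that lookup. It assumes $emailSubject holds the subject line we’re hunting for and reuses the $token from step one:

```powershell
foreach ($userEmail in $users) {
    # Target the well-known JunkEmail folder and filter by subject.
    # The backtick keeps PowerShell from expanding $filter as a variable.
    $searchUrl = "https://graph.microsoft.com/v1.0/users/$userEmail/mailFolders/JunkEmail/messages?`$filter=subject eq '$emailSubject'"

    $found = Invoke-RestMethod -Method Get -Uri $searchUrl `
        -Headers @{ Authorization = "Bearer $token" }

    # ...the move step (next section) goes here...
}
```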
This part of the script constructs the Graph API URL that targets the JunkEmail folder for a particular user ($userEmail). The ?$filter=subject eq '$emailSubject' part filters the emails to only those matching the subject you specify.
It’s like being a ninja detective, scanning for emails that don’t belong in the shadows of the junk folder. 🥷📧
Once we’ve located the email in the junk folder, we need to move it to the inbox where it belongs. Here’s how we do that:
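Here’s a sketch of the move call. It assumes $found holds the search response from the previous step and uses the well-known folder name inbox as the destination:

```powershell
foreach ($message in $found.value) {
    # POST to the message's /move endpoint; "inbox" is a well-known folder name
    $moveUrl = "https://graph.microsoft.com/v1.0/users/$userEmail/messages/$($message.id)/move"

    Invoke-RestMethod -Method Post -Uri $moveUrl `
        -Headers @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" } `
        -Body (@{ destinationId = "inbox" } | ConvertTo-Json)

    # Live feedback so you can watch the operation as it runs
    Write-Host "Moved '$($message.subject)' to the inbox for $userEmail"
}
```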
Here’s what happens in this block:
The script gives you live feedback about whether it found an email and successfully moved it. This way, you can monitor what’s happening and make sure the operation runs smoothly. You’ll know exactly what’s going on, and you can intervene if something looks off.
And there you have it! With just a few lines of PowerShell and the power of the Microsoft Graph API, you’ve become a master of email movement, whisking important messages out of the junk folder and into the inbox—all without breaking a sweat.
This script is especially handy if you’re managing multiple users and don’t want to dig through each junk folder manually. Now, you can let PowerShell and the Graph API do the heavy lifting while you take the credit for saving the day.
So next time a CEO’s email ends up in the junk folder, you’ll be ready. Just don’t forget to add this to your IT ninja toolbox! 🥷✨
Have any questions or issues? Drop them in the comments below, and let’s troubleshoot together!
Tune in for a deep dive into overcoming hurdles in cloud adoption using real-world solutions from our co-founder, Vineet Arora.
As a trusted Power BI partner, Mismo Systems is dedicated to empowering organizations with comprehensive business intelligence and data visualization solutions. We specialize in helping businesses across India—including Delhi, Noida, Bangalore—and the USA transform their data into actionable insights through Microsoft Power BI. Our partnership ensures that clients get the most out of Power BI’s powerful tools, enabling them to make informed decisions, enhance operational efficiency, and drive growth.
Mismo Systems delivers Power BI solutions to businesses in major Indian cities such as Delhi, Noida, and Bangalore, as well as in the USA. Whether you are a small enterprise or a large organization, we work closely with you to unlock the full potential of your data. As your Power BI partner, we offer a range of services designed to meet your analytics and reporting needs, allowing you to focus on what matters most—growing your business.
As your trusted Power BI partner, Mismo Systems is committed to helping you unlock the full potential of your data. From integrating data sources to creating powerful visualizations, we empower your business with the tools and insights needed for growth and success.
Contact us today to learn how our Power BI partnership can help you achieve your data-driven goals.
In today’s data-driven business landscape, enterprise analytics plays a crucial role in informed decision-making and maintaining a competitive edge. Microsoft’s Power BI service has emerged as a powerful tool for organizations seeking robust, scalable, and user-friendly analytics solutions. This blog will delve into some of the key features that make Power BI service an excellent choice for enterprise analytics, with a focus on accessibility, integration, and proactive insights.

1. Anytime, Anywhere Access with the Power BI Mobile App

In an increasingly mobile world, the ability to access critical business insights anytime, anywhere is paramount. Power BI’s mobile app brings the full power of your analytics to your smartphone or tablet, enabling you to:

– View and interact with dashboards and reports
– Set up mobile-optimized views of your reports
– Annotate and share insights directly from your device
– Use natural language queries to get quick answers

To get started with the Power BI mobile app, simply download it from your device’s app store. Once installed, log in with your work email address to access your workspace and Power BI reports. This seamless integration ensures that you have the same secure access to your data on mobile as you do on your desktop, maintaining data governance and security protocols.

2. Seamless Integration with Microsoft Teams
As remote and hybrid work models become the norm, integration with collaboration tools is more important than ever. Power BI’s integration with Microsoft Teams allows you to:
– Embed interactive Power BI reports directly in Teams channels and chats
– Collaborate on data analysis in real-time with colleagues
– Share and discuss insights without leaving the Teams environment
– Set up data-driven alerts within Teams
Best Practice: Use the Power BI tab in Teams to create a centralized location for your most important reports and dashboards, making it easy for team members to access critical data within their daily workflow.
The screenshot below shows how a Power BI report can be viewed in Microsoft Teams:
3. Automated Report Distribution with Subscriptions
In the high-stakes world of business, staying ahead means staying informed. But let’s face it: nobody dreams of waking up to a flood of reports. That’s where Power BI’s subscription feature comes in, turning information overload into actionable insights at a glance. Instead of drowning in data, decision-makers can now receive a concise snapshot of their most critical metrics right when they need it – whether that’s with their morning coffee or just before a crucial meeting. This smart approach to information sharing ensures that key stakeholders are always equipped with the latest data, without the need to dig through dashboards or lengthy reports. Power BI’s subscription feature allows you to:
– Schedule automatic delivery of reports and dashboards via email
– Set up different subscription schedules for various stakeholders
– Send snapshots or links to live reports
– Manage subscriptions centrally for better control and governance
Best Practice: Use row-level security in combination with subscriptions to ensure that each recipient only receives the data they’re authorized to view.
The following screenshot displays the interface for setting up a Power BI report subscription and how the subscription email arrives in your inbox.
4. Proactive Insights with Data Alerts
To truly excel, businesses need proactive tools that offer real-time insights and early warnings. Power BI’s data alert feature is designed precisely for this purpose, helping you stay ahead of the curve by automatically notifying you of critical changes and anomalies in your data, allowing you to address issues before they escalate and make informed decisions with up-to-date information. Power BI’s data alert feature allows you to:
– Set up custom alerts based on specific metrics or KPIs
– Receive notifications when data changes meet your defined criteria
– Configure alert sensitivity to avoid notification fatigue
– Share alerts with team members for collaborative monitoring
Best Practice: Start with a few critical metrics for alerts and gradually expand. This helps prevent alert overload and ensures that notifications remain meaningful and actionable.
The screenshot below illustrates the process of creating a data-driven alert in Power BI:
Overview
Power BI service offers a comprehensive suite of features that cater to the complex needs of enterprise analytics. By leveraging mobile access, Teams integration, automated subscriptions, and proactive alerts, organizations can foster a data-driven culture that empowers employees at all levels to make informed decisions.
As you implement Power BI in your organization, remember that successful adoption goes beyond just the technology. Focus on user training, establish clear data governance policies, and continuously gather feedback to refine your analytics strategy.
By harnessing the full potential of Power BI service, your organization can transform raw data into actionable insights, driving innovation and maintaining a competitive edge in today’s fast-paced business landscape.
This blog post is a continuation of “Why Migrate Legacy Applications to Containers and What are the Challenges this Brings?”, where we dove into the transformative world of containerization and learned why migrating your legacy applications to containers not only future-proofs your infrastructure but also enhances scalability, efficiency, and consistency.
In this part, we unravel the complexities of planning a successful migration to containers. From assessing your applications to choosing the right tools, you’ll get expert insights into each step of the planning phase.
The migration starts with an assessment of existing applications. This step is key, as it tells you which applications are the best fit for containerization and which are likely to need significant rework. Here’s how to conduct this assessment:
• Application Inventory: Take an inventory of all applications and services running in the current environment, covering software details and versions, underlying infrastructure, dependencies, and usage statistics.
• Dependency Mapping: Create detailed dependency maps for each application, including the libraries, external services, and data stores it communicates with. A tool like Docker Compose can later express these relationships in a container environment.
• Identify Potential Challenges: Look for anything that could hinder your migration, such as tightly coupled components, stateful applications, or compliance requirements; these factors determine which applications need re-architecting or should migrate first.
When considering a transition to containers, choosing the right tools and platforms is a key decision. Docker and Kubernetes are the most popular, but they serve different purposes:
• Docker: Docker lets you create, deploy, and run containers using simple commands and a Dockerfile. It is ideal for managing the container lifecycle and developing container-based applications in a local environment.
• Kubernetes: While Docker works at the individual container level, Kubernetes orchestrates containers at scale. It handles the deployment, scaling, and management of containerized applications across clusters of machines, which is why it has become the standard for production environments that call for high availability and load balancing. (See the sketch after this list for how the two fit together.)
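As a rough sketch of that division of labour, the commands below build and smoke-test an image locally with Docker, then hand the same image to Kubernetes to run at scale. The image and deployment names are made up, and a configured cluster and image registry are assumed:

```powershell
# Build an image from the Dockerfile in the current directory and smoke-test it
docker build -t legacy-app:v1 .
docker run -d -p 8080:80 legacy-app:v1

# Hand the same image to Kubernetes to run, scale, and load-balance
kubectl create deployment legacy-app --image=legacy-app:v1 --replicas=3
kubectl expose deployment legacy-app --port=80 --type=LoadBalancer
```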
When choosing tools, consider:
• Compatibility: Ensure the tools integrate well with your existing CI/CD pipelines and development workflows.
• Scalability: Always go for tooling that will scale with the demands of your application. For example, if your deployment is large-scale, Kubernetes is a brilliant fit.
• Community Support: Prefer options with strong community support and documentation, as these reflect reliability and long-term viability.
Approaching migration with a structured strategy can greatly enhance the process:
• Start Small: Begin with your least critical or simplest applications. This lets you manage risk and learn from the process without impacting major systems.
• Pilot Projects: Pilot migration projects provide valuable feedback. Choose a project that is representative of a typical application in your organization but carries no significant business risk.
• Gradual Scale-Up: After your pilot project succeeds, scale up your migration efforts gradually, applying what you’ve learned before tackling your more complex and mission-critical applications.
• Consider Refactoring: Some applications may need refactoring before being containerized. For example, refactoring can mean splitting a monolithic application into a set of microservices or making an application stateless where possible.
Ensuring your team is container-ready is as important as the technical aspects of the migration. Provide training to upskill your existing team on container technologies and Kubernetes; a number of online platforms offer courses ranging from introductory to expert level.
Bringing in an external organization to help shift legacy applications to containers can be a very strategic move. It offers a number of advantages that smooth the process, reduce risk, and help you realize more of the benefits of moving to a containerized environment. Here are some compelling reasons for enlisting external expertise:
• Expertise: External partners bring years of experience with container technologies and successful migrations across many industries, along with knowledge of best practices and the potential pitfalls your migration may run into.
• Staying Abreast of Technology: A partner keeps your solutions in line with advancements in containerization and orchestration tools like Docker and Kubernetes, so you can implement efficient, state-of-the-art solutions.
• Resource Allocation: Outsourcing offloads most of the technical complexities of the migration, enabling your internal teams to remain focused on core business functions rather than the many demands of a complex migration project.
• Reduced Learning Curve: Your staff doesn’t need days or weeks of training to get up to speed with container technology. The outsourced team fills the skills gap and helps your business adapt to new technologies faster and more productively.
• Tried-and-Tested Methodologies: While your internal team may know your organization’s IT setup best, an external provider applies proven methodologies developed over many projects, which is a much better insurance policy against risk.
• Ongoing Support: Providers offer continuing support and maintenance post-migration, which helps resolve issues quickly and drive iterative improvements to the infrastructure.
• Predictable Spending: The cost of an outsourced team may be lower than building an internal one, which carries added costs for recruiting, training, and retaining experienced IT practitioners.
• Scalability: An external team can scale its services to your project’s needs. This is far more flexible than hiring full-time employees and allows much better budget control.
• Faster Timeframe: Expert external teams with relevant experience and resources can dramatically shorten the time it takes to complete the migration. Their established tools and processes make it easier to transfer applications with minimal disruption to day-to-day operations.
• Immediate Impact: From improved scalability and efficiency to greater operational flexibility, rapid deployment brings the benefits of containerization into the organization’s life sooner rather than later.
• Unbiased Recommendations: You get impartial recommendations for your IT infrastructure, including changes your own team may overlook.
• Solutions Tailored for You: Providers bring experience delivering tailored solutions that fit differing organizational needs and constraints, so the migration strategy aligns squarely with your business goals.
At Mismo Systems, we understand that migrating your legacy applications to containers can seem daunting. That’s why our team of experienced engineers is dedicated to simplifying your transition, ensuring a smooth and efficient migration process. With our expertise, you can unlock the full potential of containerization to enhance scalability, efficiency, and deployment speed.
• Expert Guidance: Our seasoned engineers guide you through the entire migration process, from initial assessment to full-scale deployment, ensuring your business achieves its strategic goals with minimal disruption.
• Customized Solutions: At Mismo Systems, we don’t believe in one-size-fits-all answers. We create tailored solutions that fit the unique needs of your business and maximize your investment in container technology.
• Ongoing Support: Post-migration, our support team is here to help you manage your new containerized environment, from optimizing performance to implementing the latest security protocols.
If you’re ready to transform your legacy applications with containers, Mismo Systems is your go-to partner. Contact us today to learn more about how we can lead your business into the future of technology.
At this point, you should have a solid foundation for planning your migration to containers. Remember that the steps above will help ensure your transition is not only done properly but also done sustainably.
Developers work on code that is stored in a code repository, such as GitHub or AWS CodeCommit. As developers make changes and push them to the repository, a build server, such as AWS CodeBuild or Jenkins, builds the code and runs the tests.
This process is called continuous integration. It lets developers focus on writing code rather than on building and running tests, helps identify and fix bugs faster, and keeps code available for frequent releases.
With continuous integration, you have automated the code build and testing. The next step is to deploy the code. For this, you can use a deployment server, such as AWS CodeDeploy or Jenkins, which takes the output from the build server and pushes it to the test/prod environment.
With continuous delivery, there is a manual step to approve the deployment, but the deployment itself is automated and repeatable. With continuous deployment, no manual steps are required, and deployment is fully automated.
In practical scenarios, continuous deployment can be used to push releases to test and UAT servers, while manual approval is reserved for production deployments.
CodeCommit can be used as a private code repository for version control, collaboration, backup, and audit. It includes all the benefits of AWS (scale, security, and compliance) and integrates with other services, including AWS CodeBuild and Jenkins. You can use Git to connect your local repository to a CodeCommit repository, and you can configure role-based access, notifications, and triggers. For example, you can configure a trigger to execute a Lambda function for automation, as sketched below.
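Here’s how that might look with the AWS CLI run from PowerShell; the repository name, Lambda function, and account number are made up for illustration:

```powershell
# Create a private repository in CodeCommit
aws codecommit create-repository --repository-name my-app-repo

# Fire a (hypothetical) Lambda function on every event in the repository
aws codecommit put-repository-triggers `
    --repository-name my-app-repo `
    --triggers "name=on-push,destinationArn=arn:aws:lambda:us-east-1:123456789012:function:notify-team,events=all"
```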
CodeBuild, a fully managed build service, can be an alternative to tools like Jenkins. It has all the benefits of a managed service (scale, security, and no maintenance overhead) plus integration with services like CloudWatch for notifications and alerts and Lambda for automation. It uses Docker containers under the hood (you can bring your own Docker image), is serverless, and is purely pay-as-you-go (PAYG).
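Once a build project exists, kicking off a build is a one-liner (the project name is illustrative):

```powershell
# Start a build for an existing CodeBuild project
aws codebuild start-build --project-name my-app-build
```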
CodeDeploy, a managed service by AWS, deploys code to EC2 instances or on-premises machines. It can be used instead of tools like Terraform or Ansible if it meets your continuous deployment requirements. You can group environments such as prod and dev. Note that CodeDeploy does not provision resources for you; a CodeDeploy agent runs on each server or EC2 instance and performs the deployment.
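For example, deploying a zipped revision from S3 to a deployment group might look like this (the application, group, bucket, and key names are all illustrative):

```powershell
# Deploy a zipped application revision from S3 to the 'prod-servers' group;
# assumes the CodeDeploy agent is running on the target instances
aws deploy create-deployment `
    --application-name my-app `
    --deployment-group-name prod-servers `
    --s3-location "bucket=my-build-artifacts,key=releases/my-app.zip,bundleType=zip"
```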
CodePipeline orchestrates the whole deployment. It supports code repositories such as GitHub and CodeCommit, build tools such as CodeBuild and Jenkins, deployment tools such as CodeDeploy and Terraform, and load testing tools. It creates artefacts for each stage.
All these services integrate easily with powerful management and monitoring tools like CloudWatch for logging and monitoring.