Migration POC

Many of those J2EE apps have a long history and still serve customers well. An app may not have been improved incrementally over the years, often because it is stuck on decommissioned APIs, because the people behind it don't have a lot of experience with it, or because it carries mission-critical functionality that nobody wants to touch too much. The options are essentially: containerize it as it exists, migrate and then containerize, or just leave it alone.

Designing a simple CRUD microservice

Not everyone is a polyglot, and even those who are don't always have the right language for the task. In many cases, it's fine to create a small microservice that offers only CRUD operations against a simple data store. You will not be using event sourcing or NoSQL databases here.
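As a minimal sketch of what such a service's core might look like (the class and method names here are hypothetical, and a real service would expose this over a thin HTTP layer), a plain in-memory CRUD store is enough to illustrate the idea:

```python
# Minimal in-memory CRUD store: the kind of logic a simple CRUD
# microservice exposes over HTTP. All names are illustrative only.
from itertools import count


class CrudStore:
    def __init__(self):
        self._items = {}          # id -> record
        self._ids = count(1)      # monotonically increasing ids

    def create(self, data):
        item_id = next(self._ids)
        self._items[item_id] = dict(data, id=item_id)
        return self._items[item_id]

    def read(self, item_id):
        return self._items.get(item_id)

    def update(self, item_id, data):
        if item_id not in self._items:
            return None
        self._items[item_id].update(data)
        return self._items[item_id]

    def delete(self, item_id):
        return self._items.pop(item_id, None) is not None
```

In a real deployment this would be wrapped by a REST layer (Java EE, Spring Boot, or NodeJS, as discussed below) and backed by the data store of your choice; the point is how little machinery a pure CRUD service actually needs.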

Designing a web application microservice

Polyglot development offers cost and time-to-market advantages, but one should consider whether the return on investment justifies the added learning curve for developers. There are clear benefits to building APIs with Java EE or Spring Boot versus NodeJS, but is there an advantage in migrating existing code bases overnight?

Microservices Adoption Curve

A common misconception is that microservices are for greenfield projects only. The reality is that the majority of companies will not want to completely abandon their old technologies in favor of Spring Cloud, the Netflix OSS stack, or Scala & Play! It's unlikely they would be able to do this overnight anyway. So the priority should be to migrate services at a pace that is compatible with business goals and doesn't cause too much disturbance in the IT department.

The bottom line

Not every app can, or should, be migrated into a perfect microservice architecture overnight. The long-term benefits are clear, but it's usually better to start small, delivering valuable services today while you slowly modernize other parts of your infrastructure. Containers provide an excellent vehicle for this incremental approach.

When you consider moving to containers, it's important to weigh the potential benefits against the costs and current constraints. A small project is much easier than migrating an entire legacy application with monolithic architecture dependencies. The adoption curve also presents its own challenges, because by definition you will be dealing with several different technologies that are only starting to emerge as stable platforms for larger organizations.

The DB connection string and environment variables used by Docker containers

These should ideally be collected and managed in a configuration repository such as the one offered by Apache Atlas, so that you can recreate the containers and networks whenever necessary. It's also possible to reference these environment variables from other applications that need them, or even to set up a private Docker registry, push your images there, and pull them into development environments that share the same artifact repository.
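To make this concrete, here is a small sketch of reading a DB connection string from the container's environment rather than hard-coding it (the variable names and the PostgreSQL URL format are assumptions for illustration, not anything prescribed by the article):

```python
# Build a DB connection string from environment variables injected
# at container start (e.g. `docker run -e DB_HOST=db ...`).
# Variable names and URL scheme are illustrative.
import os


def db_connection_string(env=os.environ):
    host = env.get("DB_HOST", "localhost")
    port = env.get("DB_PORT", "5432")
    name = env.get("DB_NAME", "app")
    return f"postgresql://{host}:{port}/{name}"
```

Because the values come from the environment, the same image can be recreated against any network or database simply by changing the variables kept in the configuration repository.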

In terms of deployment automation, it's important to look at the whole pipeline, from the CI/CD tool down to testing, promotion, and so on. In larger organizations with multiple teams working on several services at once, this becomes more difficult because different people may want to run their code independently against different infrastructure configurations. Container orchestration tools like Kubernetes can help reduce the burden on each team by deciding where to run container instances and how to link them together.

Containers for Microservices - Migration Practice

Migration is a daunting task, especially if you're working with legacy applications that may use outdated versions of Java or NodeJS. It's important not to be too ambitious; it might make sense to target only certain components and test them first before rolling out changes company-wide. Then consider broadening your horizons: find other apps in the same codebase and move them along as well. The benefits will outweigh any challenges caused by modernizing other parts of your infrastructure at a slower pace.

Containerize or decommission?

There comes a time when you need to re-evaluate the tools and technologies you use. It might be time to migrate your development environment if, for example, you've been using unsupported versions of Apache Tomcat or NodeJS which are no longer receiving security updates. You should also consider containerizing existing applications that have dependencies on monoliths with large memory footprints.

Container technology can make a huge difference when migrating legacy apps, but it's important to weigh the benefits against any unforeseen challenges. One option is not necessarily better than another, but containers can provide an excellent vehicle for organizations looking to modernize their infrastructure over time. Choose your first step wisely, because it will set the tone for everything that follows!

POC migration projects

A POC migration project should be more about functionality and less about technology. Don't focus on the wrong things: it's unlikely you'll be able to open-source a POC, but if anything it should aim to bring your team together by bonding with the product owner over a shared mission.

In most development projects nowadays, I'm seeing that there is an obvious trend towards containerizing legacy applications. Containers provide developers with an excellent opportunity to modernize their infrastructure while gradually replacing components with lightweight services. But at the same time, each step along the way should ideally have a measurable business benefit – imagine how much value can be created from cost reduction or optimizing performance – rather than focusing too much on individual technologies themselves. At this point, moving from monoliths to microservices and migrating legacy applications is still very much a hot topic – albeit one that we should all be thinking about – but the question I'm most frequently asked is: "Should we move all our apps inside Docker containers?"

A containerized development process and POCs will both benefit from containerization. But there's no need to do it everywhere at once. Make sure your first step is successful!

I think it depends on what you want to achieve with this migration project. If you're trying to boost your team's morale by working on a shared mission, then POCs will probably benefit from using a containerized approach because there's less overhead involved in spinning up new images for each task. On the other hand, POCs which aim to prove technical capabilities will probably be better off with virtual machines since there are fewer dependencies involved. But make no mistake about it - containerization is the way forward and the benefits of using modern tools like Docker cannot be denied!

Required changes for a migration

The modern approach to development is all about small, independent services. Containerizing legacy apps and splitting out new microservices architecture will mean you'll need to make changes everywhere – not just in your application servers! For example, you might also need to adjust the way log files are stored and monitored. On top of that, it's still important to keep up-to-date with what's going on in the developer community: new security vulnerabilities or bugs may arise at any time and you need to be prepared for this eventuality.
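One common change of exactly this kind: containerized services usually stop writing their own log files and log to stdout/stderr instead, letting the platform (`docker logs`, Kubernetes, or a log shipper) handle storage and monitoring. A minimal sketch using Python's standard library (the format string is just an example):

```python
# Route all application logging to stdout so the container platform
# can collect it, instead of writing log files inside the container.
import logging
import sys


def configure_logging():
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s %(message)s"))
    root = logging.getLogger()
    root.handlers = [handler]   # replace any file handlers
    root.setLevel(logging.INFO)
    return root
```

The same idea applies whatever the stack: the application emits a stream, and where that stream is stored and monitored becomes an infrastructure concern rather than an application one.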

POCs can be containerized as long as there is low overhead involved in the change of process. This would probably work well if the POC focuses on a shared goal between team members rather than technical capabilities.

Containerization can provide significant benefits to developers of legacy applications, but it's important to be aware of all the dependencies you'll have once the migration is complete. These will appear in both your application servers and your business components, so plan ahead!

If you're considering containerizing a POC then it's likely that you've already made some progress within your development team. The next step should be gaining buy-in from the business, which means you need to evaluate how this would benefit them before committing vast resources into migrating everything at once. If your POC has been successful, then chances are there's already plenty of support available from employees who are excited by the potential changes involved. If not, back up a little bit and make sure you get everyone involved in the initial stages.

Be careful not to get lost on your way, though! You may have heard of other companies who are containerizing their entire infrastructure but this doesn't necessarily mean it's the best approach for you. As I mentioned before, each project should be evaluated individually and at this early stage, there's no need to make changes everywhere if it means over-complicating or disrupting existing processes. Just because others are doing it that way doesn't mean you have to!

When considering migration projects, remember that anything with a purely technical focus will probably benefit from virtual machines rather than containers. Containerization works great when you're trying to achieve something like boosting morale or working towards a shared goal, but not so much for technical POCs.

Generating Swagger description metadata from your ASP.NET Core Web API service

This is a great way to create self-documenting APIs (if you're into that kind of thing, and I know you are).

Swagger Editor provides an easy way to let your API consumers try out requests and inspect response data. It uses the Swagger 2.0 specification, which was developed by SmartBear Software and has been adopted as the de facto standard for describing RESTful APIs. Simply publish your Web API project, run it on a local address, copy that URL into Swagger Editor, and instantly have access to any publicly accessible functions, all without leaving your browser! You can write tests against these endpoints too if you want - no need to mock up responses anymore!

You also have the option of generating full documentation with even more information about your services, such as the definition, description, and security model. I've even managed to host it on Github pages without too much trouble so check out this description of my Web API gateway project.

There are several ways you can achieve this, but for me it was as simple as installing the Swashbuckle NuGet package in Visual Studio 2017 and then updating my code with a few attributes. The other thing that helped was understanding what the metadata looked like before using it within other tools - you can generate XML or JSON documents from it if required, but going forward I'll be sticking to Swagger!
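For reference, a Swagger 2.0 description is just a JSON document; a minimal skeleton (the API title and path here are placeholders) looks something like this:

```json
{
  "swagger": "2.0",
  "info": { "title": "Orders API", "version": "1.0.0" },
  "basePath": "/api",
  "paths": {
    "/orders/{id}": {
      "get": {
        "parameters": [
          { "name": "id", "in": "path", "required": true, "type": "integer" }
        ],
        "responses": { "200": { "description": "The requested order" } }
      }
    }
  }
}
```

Tools like Swashbuckle generate this for you from your controllers and attributes; pasting the document (or its URL) into Swagger Editor gives the interactive view described above.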

Phase of assessment

-    The team creates a POC that provides the possibility of containerizing an old, legacy application.

-    This is not a technical assessment but more of a feasibility study for moving forward with the proposed changes. After publishing the test via Swagger Editor, feedback from the business and from users of the interface will be evaluated to see if containerization is worth pursuing or should be put off for later.

Another good thing about containerizing a legacy application is that it forces developers to work through existing code and improve any issues they can find along the way. After all, everyone knows you should never deploy something without testing it first! In this instance then, having a plan in place for managing those dependencies as part of your CI/CD process could save you a headache later on.

Let's Get Ready to Containerize!

-    The team reviews its current CI/CD process and makes any necessary changes before attempting to containerize an application. This is important because you don't want your build environment getting confused about where dependencies come from - if the POC works perfectly in a local VM, chances are it'll work in production too! You may find that you've already got everything you need available in Azure or another cloud provider but if not then consider adding these into your pipeline as soon as possible.

A major part of your CI/CD process when using containers will involve building images containing the artifacts required for deployment, including third-party libraries and special configuration settings, which can take up quite a lot of space in your build environment. Make sure you've got enough storage available to accommodate this before continuing!

-    The team creates an architecture for its CI/CD process, including the necessary scripts required for building containers and deploying them to staging/production environments. This will involve communicating with external services (such as DockerHub) so be sure to check that API credentials are available or instructions on how they can be generated if not.

-    The team ensures that its processes work correctly in a local VM. It's very common for errors to occur when trying out new technologies during development (trust me, I know!) but having this work consistently across different machines is important. You want to make sure you don't run into issues like configuration files not being found, scripts failing to execute correctly or services crashing and leaving your legacy system in an unknown state.

-    The team reviews and updates its existing (or non-existing) documentation for deploying services before containerizing anything. This will involve adding things like network topology diagrams detailing how different services should be connected as well as detailed information about what happens when you update/replace a service during runtime (e.g., does it require live migration effort or can you simply restart it?)
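As one hedged sketch of such an image build (the base-image tags, paths, and artifact name below are assumptions for illustration, not taken from any real project), a multi-stage Dockerfile keeps build tools and intermediate artifacts out of the final image, which helps with the storage concerns above:

```dockerfile
# Build stage: compile the legacy app together with its third-party
# libraries. Image tags, paths, and the artifact name are illustrative.
FROM maven:3-eclipse-temurin-17 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# Runtime stage: only the deployable artifact ships in the final image;
# the Maven toolchain and build cache stay behind in the build stage.
FROM tomcat:10-jdk17
COPY --from=build /src/target/app.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
```

The same pattern applies to NodeJS or .NET services: a heavyweight build stage, and a slim runtime stage containing only what production needs.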

Phase of implementation

-    A proof of concept is implemented using Docker containers, with the team closely monitoring its progress. Feedback from business users about the application is gathered via the Swagger endpoints to understand what they hope to achieve by migrating into a containerized world.

-    Once the business team has given its feedback, it's time to evaluate whether something is worth moving forward with or not; if not then continue building out the codebase without applying containerization until its inevitable obsolescence date comes around!

-    The team applies containerization to the service and runs a successful test in staging before pushing anything live. Be sure to communicate your rollout plan well beforehand so that users know what to expect when attempting to access an endpoint using an old version of your application.

-    The team takes the lessons learned from the first migration and applies them to other services within the architecture. This should be much easier than doing everything at once because there will be plenty of documentation and tooling in place to ensure that everything goes smoothly.

-    The team starts migrating on-premise solutions by replicating the architecture in Azure (or wherever your target environment is hosted) using containers and orchestrating them using Kubernetes (this will require setting up an autoscaling infrastructure). If you can, try building out your endpoints with serverless technology too because it reduces operational overhead and frees up resources for other tasks!
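To make the orchestration step concrete, a minimal Kubernetes Deployment manifest might look like the following (the service name, image reference, and replica count are placeholders):

```yaml
# A minimal Kubernetes Deployment for one migrated service.
# Name, image, and replica count are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0
          ports:
            - containerPort: 8080
```

The autoscaling mentioned above would then be layered on with a HorizontalPodAutoscaler targeting this Deployment.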

Phase of decommissioning

-    Once all services within the application have been containerized (and possibly migrated off premises), the team begins migrating legacy components still hanging around in VMs or physical machines - if possible, try moving these into Docker too so that they're useful for people who want to hack on your codebase.

-    The team implements an automated process for migrating legacy components to containers/VMs and deletes the old instances from production (this can be done gradually, of course).

-    As more and more teams adopt the use of containers, the existing solutions will become less relevant (simply because they're so old) so it's time to decommission these too!

There are many other tasks involved in containerizing applications which I've not mentioned here but hopefully, this list provides you with a good starting point for making sure everything is documented before you even begin the first migration. Remember that failing to document anything makes future migrations much harder because people need to know how things work to fix them when things inevitably go wrong!

Nice changes (here for completeness)

I would simplify the main "phases" as:

-    Proof of concept as a tutorial, just for fun and to gather feedback. The prototype is not production-ready but is enough to be useful.

-    Implement in staging (for business team feedback). *may* work on production, we need to know if we can do it and the business (or the person running operations) will need time and budget to make it happen.

-    Migrate everything with containers and an orchestrator (like K8s), no VMs involved! Watch out for networking, storage availability/persistence, backup, and monitoring visibility. Migrating an old solution (a current VM or physical server) is usually pretty easy since all of the infrastructure is already in place.

Migration for one team may take months, or everything may be done in a few days, depending on your workload and of course on the size of your application! Each step must have its own documentation, which will help you with the migration or simply prove useful if someone wants to re-use it for other projects.

Once all services are migrated, it's time to remove the old solutions until only the containers/K8s cluster remains! If everyone follows these common steps, migrations will be simple, efficient, and cheap.

For on-premise resources, I'd recommend first building some VMs locally (with Docker) following an ARM template provided by Azure, since these templates provide a complete environment, including plenty of things you won't need for your migration. Once everything is working locally, you can point to Azure to test it out in your live environment, and if anything breaks, the templates will help you troubleshoot problems faster!

In a VMs vs. containers/K8s comparison, containers have advantages over VMs but also disadvantages that must be taken into account, just like when building solutions. Containers may increase infrastructure costs because resources are often over-assigned per container (for instance, a single core and 1 GB of RAM for each container). If the application only needs 500 MB of RAM and one CPU, but two containers are required where a single VM used to suffice, much of that RAM/CPU is left idle; there are other cases, of course, where the application genuinely needs it.
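The sizing trade-off described above is exactly what Kubernetes lets you declare per container via resource requests and limits (the values here are illustrative, not measurements from any real application):

```yaml
# Per-container sizing inside a pod spec: request what the app
# actually needs, cap it with a limit. Values are illustrative.
resources:
  requests:
    cpu: "500m"       # half a core reserved for scheduling
    memory: "512Mi"
  limits:
    cpu: "1"          # hard ceiling per container
    memory: "1Gi"
```

Declaring realistic requests is what prevents the idle RAM/CPU problem: the scheduler can pack containers more densely than VM-sized allocations would allow.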

Applications are usually designed for physical servers, so if you're planning to migrate them to VMs they'll most likely require more resources, since they weren't tested on "hyper-converged monolithic systems". Another problem arises with applications that used to rely on services (IaaS) deployed by architects who have since moved on to containers. It's hard for developers to replace those with containerized solutions because business owners usually don't want their private data access layer in a public repository! This isn't a migration problem, just a side effect of how businesses work.

Which features should I migrate first?

I'd say your team should have a strong understanding of everything you have before deciding what to migrate first. If it's only one service, focus on the migration; if it's multiple services, do an inventory of current resources and future needs so you don't spend time migrating things that won't be used!

It helps to know how the infrastructure is currently built, because some infrastructures are built to run specific applications. This happens when several related apps work together but also independently of each other (for instance, one handles the front office while another takes care of the back office). Whether to containerize each app separately or have them share containers depends on the business-criticality of each application, since downtime will impact one of them more than the other (and your company's revenue, in the worst case).

I'd recommend starting with everything that can be done cheaply (no downtime); you'll most likely migrate it quickly. After that, there will still be things to consider, like the impact on business-critical applications or infrastructure. For instance, adding storage at some point could be very expensive, since old servers cannot simply take new disks anymore! Generally speaking, if an application requires few resources then containers/K8s are great, but when there is high demand for CPU/RAM, VMs with Hyper-V might be necessary.

As I said before, containers have both advantages and disadvantages, so it's always good to learn more about them and decide whether they're the right thing for your company. If it seems too hard, don't assume nobody can help! Cloud-based solutions like Azure can be very useful, since you only get what you need, and that is more than enough to migrate most solutions!

© Copyright 2022 Geolance. All rights reserved.