How many software development environments are needed and why?

In software engineering, a "software development environment" refers to the combination of processes, tools, and infrastructure that developers use to design, create, test, and maintain software. This includes everything from Integrated Development Environments (IDEs), such as Visual Studio, Eclipse, and IntelliJ, through foundational tools and libraries, to broader components like databases, servers, and network setups. Simply put, it denotes a particular set of infrastructure resources configured to run a program under specific conditions.

As software advances through its life cycle, different environments address the distinct requirements of the Development and Operations teams. In today’s fast-moving and competitive digital business landscape, development teams must fine-tune their workflows to stay ahead. An efficient workflow enhances team productivity and ensures the timely delivery of reliable software.

Benefits of Harnessing Multiple Environments

Parallel Development

Software development often resembles juggling multiple tasks at once. When introducing new features, it is vital not to disrupt a live application by bringing in bugs, performance issues, or security vulnerabilities. While one part of the team is busy crafting new features, another might be refining an existing version based on feedback from testing. Segregated environments enable teams to work on different tasks without stepping on each other’s toes.

Enhanced Security

Limiting access to production data is crucial. By distributing data across various environments, we strengthen the security of production data and preserve its integrity. This reduces the chance of unintentional modifications to the live data during development or testing phases.

Minimized Application Downtime

These days, application stability and uptime are more crucial than ever. Customers expect and rely on consistent service availability, and repeated disruptions can damage a company’s reputation. By cultivating multiple environments and establishing rigorous testing, we position ourselves to launch robust and reliable software.

Efficient Hotfix Deployment

There are moments when a quick fix or enhancement must be rolled out with great speed. For such instances, having an environment that mirrors production closely and is free from ongoing feature development is invaluable. This dedicated environment facilitates quick feature or fix deployment, followed by testing, before a seamless transition to live production.

An In-Depth Look at Development Environments

As software evolves from an idea to a full-fledged application, it passes through various stages, each with its unique set of tools, protocols, and objectives. These stages, or environments, form the backbone of the development lifecycle, ensuring that software is crafted, refined, tested, and deployed with precision.

Local Development Environment

The initial stage of software development occurs in the local development environment. It acts as the primary workspace where developers begin the coding process, often directly on their personal computers with a distinct project version. This setting allows a developer to build application features without interfering with other ongoing development. While this environment is well suited to running unit and integration tests (with mocked external services), end-to-end tests are typically less frequent here. Developers commonly employ Integrated Development Environments (IDEs), software platforms offering an extensive suite of coding, compiling, testing, and debugging tools.
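As a minimal illustration of local testing with mocked external services (all names below are hypothetical), the sketch unit-tests an order-creation function while replacing an external payment service with a mock, so the test runs entirely on the developer’s machine:

```python
# Minimal sketch (hypothetical names): a unit test that mocks an external payment
# service, the kind of test typically run in a local development environment.
from unittest.mock import Mock


def create_order(item_id: str, amount: float, payment_client) -> dict:
    """Charges the customer via an external payment service and returns the order."""
    charge = payment_client.charge(amount=amount)  # external call in production
    return {"item_id": item_id, "charge_id": charge["id"], "status": "created"}


def test_create_order_with_mocked_payment_service():
    payment_client = Mock()
    payment_client.charge.return_value = {"id": "ch_123"}  # fake external response

    order = create_order("sku-42", 9.99, payment_client)

    payment_client.charge.assert_called_once_with(amount=9.99)
    assert order["status"] == "created"
```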

Integration Environment

At this stage in the development process, developers merge their code into the team’s shared codebase. With many developers or teams working independently, conflicts and test failures can naturally arise during this integration. In large projects, where multiple teams focus on distinct segments (or microservices), the integration environment becomes the critical platform where all these separate pieces of functionality come together. Integration tests may also be adjusted here to ensure application stability. Divergent implementations across teams (such as mismatched API integration points) often trace back to the initial analysis stage. Furthermore, the difficulty of developing cloud-native features locally underlines the integration environment’s essential role, highlighting the differences between local setups and actual cloud operations.

Test Environment

Also known as the quality assurance environment, it employs rigorous tests to evaluate individual features and the application’s overall functionality. Tests range from internal service interactions (integration tests) to all-inclusive tests covering internal and external services (end-to-end tests). Typically, the test environment doesn’t demand the extensive infrastructure of a production setting. The primary goal is to ensure the software meets specifications and to resolve any defects before they reach production. Organizations may optimize their processes by combining the integration and test environments, facilitating simultaneous initial integration and testing.
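As a small, hedged sketch (the environment variable and endpoint names are assumptions), an end-to-end smoke test can pick up its target from configuration, so the same test runs against the test or staging deployment without code changes:

```python
# Minimal sketch: an end-to-end smoke test that targets whichever environment
# TEST_BASE_URL points at (e.g. the test or staging deployment); hypothetical names.
import os

import pytest
import requests

BASE_URL = os.environ.get("TEST_BASE_URL")  # e.g. "https://test.example.com"


@pytest.mark.skipif(BASE_URL is None, reason="no test environment configured")
def test_health_endpoint_responds():
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
```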

Staging Environment

The staging or pre-production environment aims to simulate the production environment regarding resource allocation, computational demands, hardware specifications, and overall architecture. This simulation ensures the application’s readiness to handle expected production workloads. Organizations sometimes opt for a soft launch phase, where the software goes through internal use before its full-scale production deployment. Access to the staging environment is typically limited to specific individuals like stakeholders, sponsors, testers, or developers working on imminent production patches. This environment’s closeness to the actual production setting makes it the go-to for urgent fixes, which, once tested here, can swiftly be promoted to production.

Production Environment

The production environment refers to the final and live phase providing end-user access. This setup includes hardware and software components like databases, servers, APIs, and other external services, all scaled for real-world use. The infrastructure in the production environment must be prepared to handle large volumes of traffic, cyber threats, or hardware malfunctions.

Other Environments

The specific needs of an application, the scale of the project, or business requirements may necessitate the introduction of additional environments. Some of the more common ones include:

  • Performance Environment: Dedicated to gauging the application’s efficiency and response times.
  • Security Testing Environment: The primary focus is to assess the application’s resilience to vulnerabilities and threats.
  • Alpha/Beta Testing Environments: These are preliminary versions of the application made available to a restricted group of users for early feedback.
  • Feature Environments: New functionalities can be evaluated in a standalone domain before being incorporated into the primary integration environment.

Summary

The software development process requires a series of specialized environments tailored to different stages of its lifecycle. The number and nature of these environments can vary based on the size and requirements of the project. For example, in some cases, to optimize workflows, the integration and testing environments might be combined into one, providing a unified platform for both merging code and conducting initial tests.

While performance-focused environments have their place, with proper monitoring tools in place, the production environment can sometimes eliminate the need for a separate performance environment.

In conclusion, the software development environment isn’t a one-size-fits-all approach. It demands careful planning and customization to fit a project’s specific goals and needs. Making the right choices in setting up these environments is critical to ensuring a smooth journey from idea to launch, ultimately delivering top-notch applications.

Author

Róbert Ďurčanský
Senior Fullstack Developer

Róbert is a highly skilled Senior Fullstack Developer with over 15 years of experience in the software development industry. With a strong background in back-end and front-end development as well as UX & graphics, and a passion for delivering high-quality solutions, Róbert has proven expertise in a wide range of technologies and frameworks. He is proficient in TypeScript, Angular, Java, Spring Boot, Kotlin, and AWS Cloud Solutions, among others. Throughout his career, Róbert has worked on various projects, including e-commerce platforms, financial systems, and game development.

The entire Grow2FIT consulting team: Our team


Reference: Raiffeisen Bank International – Designing a Digital Bank’s Data Architecture

Raiffeisen Bank International (RBI), a prominent banking group, was embarking on the launch of its new digital banking platform. With the rapid digitization of banking services and the increasing demand for seamless online customer experiences, RBI recognized the need for a robust and adaptable data architecture. While the bank had in-house teams proficient in traditional banking systems, it sought external expertise to harness the full potential of contemporary cloud technologies.

The Problem

RBI’s vision of its digital bank was modern, agile, and future-ready. The challenge was twofold:

  • Designing a data architecture that would be scalable, efficient, and capable of handling the vast influx of digital transactions.
  • Ensuring that the architecture, while modern, would remain compliant with internal and external regulations and seamlessly integrate with RBI’s existing systems.

Our Solution

Our specialized team of Data Consultants delved into the project with a two-pronged approach:

  • Serverless and Cloud-Agnostic Architecture: Our design principles prioritized a serverless framework on AWS. This not only ensured automatic scalability without the overhead of managing servers but also brought down operational costs. Moreover, by designing the architecture to be cloud-agnostic, we ensured that RBI would not be tethered to a single cloud provider, granting them flexibility and resilience in their digital endeavours.
  • Integration and Compliance: Acknowledging the paramount importance of security and regulation in the banking sector, we tailored our solution meticulously. We:
    • Conducted a comprehensive Requirements Analysis to ascertain the bank’s needs and align our design accordingly.
    • Crafted the Data Architecture and Data Processing blueprint utilizing a suite of cloud-agnostic services, ensuring optimal data flow, storage, and retrieval mechanisms.
    • Ensured Internal Regulation Compliance by integrating the architecture with RBI’s internal environment, embedding requisite security measures, and devising a robust security concept.

Outcome

With our intervention, Raiffeisen Bank International now boasts a state-of-the-art digital banking data architecture that stands as a beacon of efficiency, resilience, and adaptability. The bank is poised to deliver unmatched digital banking experiences to its customers while staying ahead of the curve in the rapidly evolving fintech landscape.


Key Technologies

  • AWS


Welcome Marián Ivančo: Software Architect with 20+ Years of Experience

We are pleased to announce Marián Ivančo has joined our team. With over 20 years in the field, Marián has extensive experience in designing and implementing complex IT systems. His work has covered a range of sectors, including finance, gaming, and energy.

Marián is adept at migrating from legacy systems to modern container solutions. His technical expertise includes Java, Kubernetes, cloud solutions, and container platforms. Throughout his career, he’s played pivotal roles in large-scale system integrations and migrations.

We’re looking forward to Marián’s contributions and the wealth of experience he brings to our team.

Check our other Senior Consultants here


Our Summer Teambuilding Adventure Was a Splash!

🌊☀️ Had an absolute blast at our summer teambuilding event! 🚣‍♂️🏄‍♀️

Check out this video for a sneak peek of our adventurous day filled with rafting, surfing, and more! 💦
Grateful for a team that knows how to work hard and play hard. 💪😄


Reference: Atlas Group – Monitoring, Support, and Infrastructure Development

Atlas Group is a technology-driven organization that relies on Kubernetes for its infrastructure. They sought assistance in monitoring, support, and problem-solving in their Kubernetes environment. Additionally, they required help in setting up a distributed block-based storage solution based on LINSTOR to provide persistent volumes for their pods or NFS storage.

Solution

Grow2FIT, a service provider specializing in Kubernetes and infrastructure management, partnered with Atlas Group to address their needs. The following services were provided:

  • Monitoring and Support
    • Implemented a monitoring system to identify issues or anomalies in the Kubernetes environment proactively.
    • Established a support mechanism so that problems could be promptly addressed and resolved by the Grow2FIT team.
    • Responded to requests for assistance regarding Kubernetes and other related technologies.
  • Problem Solving and Consultation
    • Provided consultation services to Atlas Group, offering expertise and guidance in problem-solving and troubleshooting within the Kubernetes ecosystem.
  • Infrastructure Development
    • Upgraded Kubernetes to newer versions, ensuring smooth transitions and minimizing disruptions.
    • Engaged in ongoing maintenance and problem resolution related to Kubernetes and other infrastructure components.
  • Distributed Block-based Storage (LINSTOR)
    • Assisted Atlas Group in setting up a distributed block-based storage solution based on LINSTOR.
    • Configured LINSTOR to provide persistent volumes for their pods, enabling data persistence and reliability (a minimal sketch follows this list).
    • Integrated NFS storage into the infrastructure, leveraging LINSTOR to extend the available storage options.
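As a hedged illustration of the persistent-volume setup (the storage-class name, namespace, and size are assumptions, not details of the Atlas Group deployment), a LINSTOR-backed volume can be requested through the official Kubernetes Python client roughly like this:

```python
# Sketch: requesting a LINSTOR-backed persistent volume claim for a pod.
# "linstor-replicated" stands in for whatever LINSTOR CSI storage class is defined.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="linstor-replicated",  # assumed LINSTOR storage class
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```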

Result

  • Swift identification and resolution of issues through proactive monitoring and responsive support.
  • Successful implementation of LINSTOR, providing reliable and persistent volumes for their pods.
  • A collaborative partnership between Atlas Group and Grow2FIT ensured ongoing support and consultation, enabling their infrastructure’s seamless development and enhancement.


Key Technologies

  • Kubernetes
  • LINSTOR

Contact Person

Tomáš Řehák, Head of Engineering


We’ve moved to The Spot Bratislava

We are thrilled to announce our move to a new office in The Spot Bratislava! Our fresh and inspiring workspace is all about growth, innovation, and collaboration. Come visit us, enjoy a cup of coffee, and see our new environment for yourself!

 

 


Meet Róbert Ďurčanský: A Highly Skilled Senior Fullstack Developer

We are delighted to introduce Róbert Ďurčanský, a seasoned Senior Fullstack Developer with over 15 years of experience in the software development industry. Róbert brings a wealth of expertise in back-end and front-end development and UX & graphics, along with a passion for delivering high-quality solutions.

With proficiency in technologies like TypeScript, Angular, Java, Spring Boot, Kotlin, and AWS Cloud Solutions, Róbert is well-versed in a wide range of frameworks and tools. His technical prowess enables him to adapt to evolving technologies, ensuring efficient and innovative solutions for complex projects.

Throughout his career, Róbert has contributed to diverse projects, including e-commerce platforms, financial systems, and game development. His ability to tackle challenges with ease and his meticulous attention to detail have consistently delivered remarkable results.

Check our other Senior Consultants here


Case study: Teradata to Snowflake migration for a large retailer

Customer situation

The customer is a leading FTSE 100 UK-based retailer operating a large data warehouse (approximately 300 TB, 10,000+ tables, 100,000+ columns, 30,000,000+ new transactions per day) on the Teradata platform. Reports and data from it were used primarily by the finance department, as well as by many other teams, to manage their performance and to feed various analytics tools.

The customer is undergoing a transformation to a strategic data platform, and Snowflake was selected as the best-fitting, most performant solution. This strategic platform is also coupled with a completely new data modeling approach following the Data Vault 2.0 standard. But as this is a long-term project, it was necessary to find an interim solution that addressed the issues of the existing Teradata DWH (low performance, expensive to operate) as soon as possible.

Selected approach

For the interim solution, our UK partner company, LEIT DATA, selected a migration of the existing Teradata DB to Snowflake. We decided to keep the current data model to retain backward compatibility of reports and integrations and to be as quick and efficient as possible. This enabled us to keep the existing reporting tools (e.g., SAP BusinessObjects) with only minimal tweaks. The strategic project also includes a new reporting solution (Power BI) successfully integrated with the new Snowflake DB.

The Teradata ingestion pipeline consisted of many stored procedures run by various triggers. This solution was replaced with a more maintainable set of Python scripts ingesting data from S3 batch files already generated for the Teradata solution.
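As a hedged sketch of this pattern (stage, table, and file names are illustrative, not the customer’s actual objects), such an ingestion script essentially issues a COPY INTO statement against an external stage pointing at the existing S3 batch files:

```python
# Sketch: loading one S3 batch file into Snowflake via an external stage.
# All object names (stage, schema, table, warehouse) are hypothetical.
import snowflake.connector


def load_batch(batch_file: str) -> None:
    conn = snowflake.connector.connect(
        account="my_account",
        user="loader",
        password="***",
        warehouse="LOAD_WH",
        database="DWH",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        # @s3_stage is an external stage already configured to point at the S3 bucket
        cur.execute(
            "COPY INTO staging.transactions "
            f"FROM @s3_stage/{batch_file} "
            "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
        )
    finally:
        conn.close()


if __name__ == "__main__":
    load_batch("transactions/2023-08-01.csv")
```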

We also found that the existing Teradata security model was neither manageable nor scalable, as it consisted of more than 1 million individual SQL statements (“GRANT”s). We implemented a new security model leveraging Snowflake’s native data classification capabilities. This enabled the customer to control access to columns and tables containing sensitive PII data efficiently.
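To make the classification-based approach concrete, the hedged sketch below (role, tag, schema, and column names are assumptions) shows the general shape of a tag-attached masking policy in Snowflake, where classifying a column becomes a single statement rather than thousands of GRANTs:

```python
# Sketch: a PII tag with an attached masking policy, applied to one column.
# All identifiers (governance schema, PII_READER role, dwh.customers.email) are hypothetical.
import snowflake.connector

statements = [
    "CREATE TAG IF NOT EXISTS governance.pii",
    """
    CREATE MASKING POLICY IF NOT EXISTS governance.mask_pii AS (val STRING)
    RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '*** MASKED ***' END
    """,
    # Attach the policy to the tag, then classify a column with that tag.
    "ALTER TAG governance.pii SET MASKING POLICY governance.mask_pii",
    "ALTER TABLE dwh.customers MODIFY COLUMN email SET TAG governance.pii = 'email'",
]

conn = snowflake.connector.connect(
    account="my_account", user="admin", password="***", database="DWH"
)
try:
    cur = conn.cursor()
    for stmt in statements:
        cur.execute(stmt)
finally:
    conn.close()
```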

The migration took a team of approximately 10 people over 1.5 years. Much effort went into extensive testing to ensure that the reports were accurate “to the penny.”


High-level architecture transition

Benefits

The data warehouse migration to Snowflake enabled the customer to decommission the legacy Teradata platform, eliminating its support and maintenance costs, and to replace a dedicated team of 8 Teradata support contractors with a smaller permanent Data Engineering squad focused on strategic data-value products.

This alone resulted in multi-million-pound annual savings. Snowflake also made us rethink how the team delivered Data Products and optimize team effectiveness, significantly decreasing time-to-market from 3+ months to less than four weeks.

Snowflake data federation allows the migrated database (in the legacy format) to be easily shared with the new strategic data warehouse (in the Data Vault format). This accelerated the migration path to the strategic data platform.

It also had these additional benefits:

  • Report generation and data processing sped up by orders of magnitude.
  • An easily manageable, scalable, and auditable security model that ensures full GDPR and PII-protection compliance.
  • Reduced complexity for the data visualization, data science, and analytics communities within the organization, increasing their productivity.

Lessons learned

Here are key issues we came across during the project and lessons learned from them:

  • Large-volume data egress from the Teradata platform appears to be throttled at the hardware level. The export ran extremely slowly (300 TB took a month to export), and after investigating every other possible cause (network stack, landing zone, etc.), we concluded that the root cause lay in the Teradata platform itself.
  • The Teradata platform exhibits unusual decimal-rounding behavior. This was compounded by a poor design choice in the original data model (using float instead of decimal to store financial data; see the small illustration after this list). It led to differing results when reconciling and cross-checking reports from Teradata against Snowflake, and each such discrepancy had to be fully investigated, resulting in a lengthy testing period.
  • Some companies provide services for out-of-support Teradata infrastructure (e.g., replacing failed disks). They may be interested in buying out existing systems after migration.
  • As part of any large-scale data migration, we suggest reviewing all existing reports to identify those that are unused or used only sporadically. This can be done by reviewing report access logs, or by replacing reports with unclear owners or usage with a static notice asking users to contact the migration team. The goal is to eliminate legacy reports and thus reduce the overall testing effort.
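The float-versus-decimal pitfall mentioned above is easy to demonstrate; the tiny snippet below shows why binary floats complicate “to the penny” reconciliation while exact decimal types do not:

```python
# Binary floats cannot represent most decimal fractions exactly,
# which is why financial data belongs in DECIMAL/NUMERIC columns.
from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004  (binary float)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3                  (exact decimal)
```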

Contact us to get started

Our team participated in critical architecture design and delivery management roles. Contact us for a free assessment session where, together with your data leadership team, we will evaluate the potential for savings and for enhancing your agility in delivering data-value products.


Meet Our New Colleague: Welcoming Pali Jasem to Our Team

We are happy to introduce you to our new colleague, Pali Jasem, an experienced professional with over 20 years in IT and consulting. Pali’s experience spans a wide range of areas, including data processing, artificial intelligence, business analytics, knowledge discovery, UX/CX, solution architecture, and IT product management.

Before joining Grow2FIT, he held the position of CTO at GymBeam, where he helped grow the company and build the IT team. Previously he worked for companies such as Pelican Travel, Solar Turbines San Diego, Seznam Prague, and other tech start-ups and corporations.

He is currently working as a business architect on a web applications development project for our client Solargis, where he applies his experience in business analysis and architecture. We are happy that Pali will expand our expert team and wish him many personal and professional successes.


What is Data Mesh and why do I care? – Part III.

In the first part of our series on Data Mesh, we introduced the concept and principles of Data Mesh. In the second part of the series, we looked at the technology enablers of introducing the Data Mesh idea to your organization and typical objections to Data Mesh. In this final part of the series, we will introduce the plan of how to start with Data Mesh in your organization.

How do we start with Data Mesh?

1. Assess organizational data maturity, pain points, and plans

Do a quick assessment to measure organizational maturity in data areas such as:

  • Data modeling – what modeling standards are used, how models are reviewed, what tools are used and how they are integrated, what artifacts are generated from models…
  • DataOps – the state of CI/CD for data flows, batch-job monitoring, logging analysis and reporting, data quality monitoring, infrastructure monitoring and scaling… (a small data-quality check sketch follows this list)
  • Data Security – how security policies are defined and enforced, how data is classified and how that classification is retained through transformation processes, data lineage analysis, how user identities and roles are identified and managed…
  • AI/ML Review – how (if at all) AI/ML is used within the organization, what datasets are required for model training, what data is produced…
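As a small illustration of what “data quality monitoring” can mean in practice (the file and column names are hypothetical, not part of any specific client setup), even a simple scripted check over a daily extract gives the assessment something concrete to evaluate:

```python
# Sketch: a minimal data-quality check over a CSV extract (hypothetical columns).
import csv


def check_quality(path: str) -> dict:
    """Count rows violating two simple rules: missing customer_id, negative amount."""
    total = missing_id = negative_amount = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if not row.get("customer_id"):
                missing_id += 1
            if float(row.get("amount") or 0) < 0:
                negative_amount += 1
    return {
        "rows": total,
        "missing_customer_id": missing_id,
        "negative_amount": negative_amount,
    }


if __name__ == "__main__":
    print(check_quality("transactions.csv"))
```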

The assessment also captures the current data stack and identifies potential risks and pain points (legacy tools, expensive licenses preventing wider tool usage, performance or stability issues, etc.).
It should also gather the long-term strategy and the key ongoing or short-term planned business projects that either affect the data area or require critical data inputs. Such an assessment typically takes 4-6 weeks.

2. Plan tactical and strategic data stack and activities

Based on the assessment and the gathered inputs, prepare:

  • Data platform strategy – a high-level outline of how the data platform should operate, what capabilities it should provide, and its key interactions with the organization’s other projects
  • Tactical (next 3 months) and strategic (1-2 years) data stack – which tools should be used, which deprecated, and how they should be integrated with one another and with other systems
  • Domain model – prepare the initial data domain model (L1) and break it into sub-domains (L2) where possible. We suggest using organization structure and IT systems architecture for the initial domain split. In other words – leverage “Conway’s law” rather than trying to fight it.
  • Governance model – the data platform governance structure, including the mapping of domains onto the org chart and the key roles and processes to define and approve Data Products, set security rules, monitor operations, audit data access, etc.

This activity should take 2-4 weeks including review and approval by key stakeholders.

3. Identify pilot and staff pilot team

Select the domain and the pilot Data Products (and reports) to be built. Allocate the necessary team, preferring fully dedicated allocation where possible to ensure the team’s full focus. The pilot typically also validates new technologies and tooling; for these, make sure appropriate support from IT Operations is committed: installation support, network setup (firewalls, etc.), access to source data systems, credentials provisioning, and so on. Allow ample time to set up and stabilize each tool before onboarding users.
The goal is to deliver the pilot Data Products and reports within 3 months (ideally 2 months, depending on the lead time for any new tooling setup).

4. Evaluate and scale

Evaluate the issues encountered during pilot delivery, focusing in particular on classifying whether they are one-off (due to the new methodology, tooling, and/or team) or have a more fundamental root cause that needs to be addressed.
Decide on the next data domains and outline the initial set of new Data Products to build. Communicate the project and the Data Mesh concept to a wider audience, especially where users can find the new Data Product Catalog and relevant reports, and provide a contact for the expert team.
The critical step is to establish an in-house “black belt factory” – a program to train the trainers, who can then support the Data Mesh rollout organization-wide.

Start now!

We provide our customers with the knowledge, training, assets, and resources needed to start the Data Mesh journey quickly.
Our services typically consist of:

  • A quick, focused 2-day pre-assessment to identify the key areas where Data Mesh can bring the most business value.
  • A 2-3 day “data hackathon” integrating with real systems and the proposed tooling to demonstrate their feasibility and efficiency.
  • Driving the data assessment, presenting outcomes, and proposing plans to C-level stakeholders to gain broad support for the Data Mesh roll-out.
  • Designing the tactical and strategic data architecture and the recommended data stack, preparing guidelines (modeling methodology, ingestion patterns, CI/CD pipelines, etc.), and setting up architectural ceremonies (data forum, architecture approval committee, etc.).
  • Providing resources to lead and deliver the Data Mesh pilot where the organization cannot staff it internally quickly enough.
  • Setting up and running a training program for internal teams to ensure organizational self-sufficiency and keep key know-how “in-house”.

Please contact us to learn more

Author

Miloš Molnár
Grow2FIT BigData Consultant

Miloš has more than ten years of experience designing and implementing BigData solutions in both cloud and on-premise environments. He focuses on distributed systems, data processing, and data science using the Hadoop tech stack and the cloud (AWS, Azure). Together with his teams, Miloš has delivered many batch and streaming data processing applications.
He is experienced in providing solutions for both enterprise clients and start-ups, following principles of transparent architecture, cost-effectiveness, and sustainability within each client’s environment, aligned with enterprise strategy and the related business architecture.

The entire Grow2FIT consulting team: Our team
