“Many IT teams are already well versed in DevOps and have the tools and practices in place, but database development can sometimes be left behind. It is not included because it’s regarded as too complicated and viewed as a siloed activity that only comes in at the end of the process,” says Jakub Lamik, Chief Product Officer and Board member, Redgate Software, in an exclusive interview with EnterpriseTalk.
ET Bureau: What are the database challenges IT departments are facing in the post-pandemic era?
Jakub Lamik: The main challenges are monitoring database estates remotely; migrating to the cloud; and standardizing development practices, particularly in multi-database environments.
The pandemic has forced these challenges up the agenda, and the signs suggest that the shift to remote working, probably on a hybrid basis, will continue. This makes monitoring databases securely and efficiently harder, as database administrators may not always be in the same location yet still need to be alerted to problems and performance issues.
Businesses have also recognized that migrating to the cloud simplifies innovation and enables them to release new services and features faster. As a result, IT professionals now have to monitor and manage disparate estates, with some servers on-premises and others on cloud platforms like Microsoft Azure, Amazon RDS, and Google Cloud. This is prompting many businesses to look at third-party monitoring solutions, because manual monitoring across platforms is becoming too complex and time-consuming.
Then there are multi-database environments. SQL Server and Oracle are still the workhorses of the database world, but we’re seeing development teams introduce open-source and cloud-first databases alongside them for new projects where it’s appropriate. This presents its own challenges in maintaining consistent processes and toolchains across teams and technologies.
ET Bureau: What steps can enterprises take to deliver software fast while protecting business-critical data?
Jakub Lamik: This has been one of the main challenges for a while, as developers often prefer to test proposed changes against a version of the production database to ensure they won’t cause a problem when released. However, this raises serious security concerns, because production databases very often contain sensitive data that needs to be protected.
The problem is often worked around by developing and testing against a copy of the production database containing only a limited set of anonymized data. However, the results are neither realistic nor of a size where the impact on performance can be assessed.
An alternative is to take a copy of the production database and mask the data manually by replacing columns with similar but generic data. This copy can then be used in development and testing but will age very quickly as ongoing changes are deployed to the production database.
The best option, and the one typically seen in the financial and healthcare sectors, is to use data masking measures like pseudonymization, encryption, anonymization and aggregation.
This often requires using a third-party tool to streamline the process, and it helps to speed up development while keeping data safe. Many businesses are also combining it with data virtualization technology, which can provision the masked database copies in seconds – something that previously took hours. These datasets also use a fraction of the storage of the original, so in many cases it also becomes cheaper.
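To make the masking measures mentioned above concrete, here is a minimal sketch, not Redgate’s actual tooling, of two of them: pseudonymization (replacing an email with a deterministic one-way token) and aggregation (banding exact ages into 10-year ranges). The `customers` table and its columns are invented for illustration, and Python’s built-in sqlite3 stands in for a production database.

```python
import hashlib
import sqlite3

# Hypothetical customers table with sensitive columns, built in memory
# so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "alice@example.com", 34), (2, "bob@example.com", 57)],
)

def pseudonymize(value: str) -> str:
    # Deterministic one-way token: the same input always maps to the same
    # token, so joins across tables still line up, but the original value
    # cannot be read back out of the masked copy.
    return "user_" + hashlib.sha256(value.encode()).hexdigest()[:12]

# Mask emails and aggregate ages into 10-year bands before the copy is
# handed to development and test environments.
rows = conn.execute("SELECT id, email, age FROM customers").fetchall()
for row_id, email, age in rows:
    conn.execute(
        "UPDATE customers SET email = ?, age = ? WHERE id = ?",
        (pseudonymize(email), (age // 10) * 10, row_id),
    )
conn.commit()
```

Because the token is deterministic, developers can still test joins and lookups against realistic-looking data, which is what makes masked copies more useful than the small anonymized subsets described earlier.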
ET Bureau: How can enterprises unlock agility and optimize performance across the entire software development lifecycle?
Jakub Lamik: The key is to adopt DevOps in both application and database development. As the word suggests, DevOps breaks down the walls that can exist between development and operations teams, prompts them to work in the same way, and looks to streamline and automate as many processes as possible. Instead of aiming for big releases with long development cycles, it encourages frequent, smaller releases that deliver value to customers faster.
Many IT teams are already well versed in DevOps and have the tools and practices in place, but database development can sometimes be left behind. It is not included because it’s regarded as too complicated and viewed as a siloed activity that only comes in at the end of the process.
However, there are now proven and mature practices and processes that mean the database can be developed alongside applications, often integrating with the same tools. This speeds up the software development lifecycle and stops the database from slowing down the process or becoming a problem at the end of the development pipeline.
ET Bureau: How can enterprises automate database deployments across teams and technologies?
Jakub Lamik: This follows on from the previous point on DevOps practices. By standardizing the way databases are developed, it becomes possible to apply similar practices and processes across teams and technologies. Teams typically start by version controlling database code and automating testing so that errors are picked up earlier in the development pipeline.
These processes streamline development, improve the quality of code, and mean that database deployments, or migrations, can then also be automated. There are tools available that work across databases and platforms so that multiple teams can manage their deployments in the same way.
Jakub Lamik is Chief Product Officer and Board Member at Redgate Software, responsible for the product strategy and roadmap. Before joining Redgate he was a Vice President of Product at ARM, looking after portfolio software and hardware products and solutions for Mobile, Home, VR/AR, Security and AI/ML use cases. Prior to working at ARM, he held senior management positions in numerous technology companies.