Google breaks AI performance records in MLPerf with world’s fastest training supercomputer

Naveen Kumar – Google AI

Fast training of machine learning (ML) models is critical for research and engineering teams that deliver new products, services, and research breakthroughs that were previously out of reach. Here at Google, recent ML-enabled advances have included more helpful search results and a single ML model that can translate 100 different languages.

The latest results from the industry-standard MLPerf benchmark competition demonstrate that Google has built the world’s fastest ML training supercomputer. Using this supercomputer, as well as our latest Tensor Processing Unit (TPU) chip, Google set performance records in six out of eight MLPerf benchmarks.


Figure 1: Speedup of Google’s best MLPerf Training v0.7 Research submission over the fastest non-Google submission in any availability category. Comparisons are normalized by overall training time regardless of system size, which ranges from 8 to 4096 chips. Taller bars are better.1

We achieved these results with ML model implementations in TensorFlow, JAX, and Lingvo. Four of the eight models were trained from scratch in under 30 seconds. To put that in perspective, consider that in 2015 it took more than three weeks to train one of these models on the most advanced hardware accelerator available. Google’s latest TPU supercomputer can train the same model almost five orders of magnitude faster just five years later (three weeks is roughly 1.8 million seconds, so a sub-30-second run is a speedup of about 60,000x, or nearly 10^5).

In this blog post we’ll look at some of the details of the competition, how our submissions achieve such high performance, and what it all means for your model training speed.

MLPerf models at-a-glance

MLPerf models are chosen to be representative of cutting-edge machine learning workloads that are common throughout industry and academia. Here’s a little more detail on each MLPerf model in the figure above:

  • DLRM represents ranking and recommendation models that are core to online businesses from media to travel to e-commerce
  • Transformer is the foundation of a wave of recent advances in natural language processing, including BERT
  • BERT enabled Google Search’s “biggest leap forward in the past five years” 
  • ResNet-50 is a widely used model for image classification
  • SSD is an object detection model that’s lightweight enough to run on mobile devices
  • Mask R-CNN is a widely used image segmentation model that can be used in autonomous navigation, medical imaging, and other domains (you can experiment with it in Colab)

In addition to the industry-leading results at maximum scale above, Google also provided MLPerf submissions using TensorFlow on Google Cloud Platform that are ready for enterprises to use today. You can read more about those submissions in this accompanying blog post.

The world’s fastest ML training supercomputer


The supercomputer Google used for this MLPerf Training round is four times larger than the Cloud TPU v3 Pod that set three records in the previous competition. The system includes 4096 TPU v3 chips and hundreds of CPU host machines, all connected via an ultra-fast, ultra-large-scale custom interconnect. In total, this system delivers over 430 PFLOPs of peak performance.


Table 1: All of these MLPerf submissions trained from scratch in 33 seconds or faster on Google’s new ML supercomputer.2

Training at scale with TensorFlow, JAX, Lingvo, and XLA

Training complex ML models using thousands of TPU chips required a combination of algorithmic techniques and optimizations in TensorFlow, JAX, Lingvo, and XLA. To provide some background: XLA is the underlying compiler technology that powers all of Google’s MLPerf submissions, TensorFlow is Google’s end-to-end open-source machine learning framework, Lingvo is a high-level framework for sequence models built using TensorFlow, and JAX is a new research-focused framework based on composable function transformations. The record-setting scale above relied on model parallelism, scaled batch normalization, efficient computational graph launches, and tree-based weight initialization.
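
To make the data-parallel part of that recipe concrete, here is a minimal, illustrative sketch of a synchronous data-parallel training step in JAX using pmap. The toy linear model, loss, and learning rate are placeholder assumptions; this is not Google’s MLPerf code, which layers on model parallelism and the other optimizations named above.

```python
# Toy, illustrative sketch of synchronous data parallelism in JAX with pmap.
# The linear model, loss, and learning rate are placeholders.
from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(params, inputs, targets):
    preds = inputs @ params["w"] + params["b"]  # toy linear model
    return jnp.mean((preds - targets) ** 2)     # mean squared error

@partial(jax.pmap, axis_name="batch")  # run one copy of the step per device
def train_step(params, inputs, targets):
    grads = jax.grad(loss_fn)(params, inputs, targets)
    # All-reduce: average gradients across devices every step.
    grads = jax.lax.pmean(grads, axis_name="batch")
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

n = jax.local_device_count()
# Replicate parameters onto every local device (e.g. TPU cores).
params = jax.device_put_replicated(
    {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}, jax.local_devices())
# One shard of the global batch per device.
inputs = jnp.ones((n, 32, 8))
targets = jnp.ones((n, 32, 1))
params = train_step(params, inputs, targets)
```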

All of the TensorFlow, JAX, and Lingvo submissions in the table above—implementations of ResNet-50, BERT, SSD, and Transformer—trained on 2048 or 4096 TPU chips in under 33 seconds each.

TPU v4: Google’s fourth-generation Tensor Processing Unit chip

Google’s fourth-generation TPU ASIC offers more than double the matrix multiplication TFLOPs of TPU v3, a significant boost in memory bandwidth, and advances in interconnect technology. Google’s TPU v4 MLPerf submissions take advantage of these new hardware features with complementary compiler and modeling advances. The results demonstrate an average improvement of 2.7 times over TPU v3 performance at a similar scale in the last MLPerf Training competition. Stay tuned: more information on TPU v4 is coming soon.


Figure 2: TPU v4 results in Google’s MLPerf Training v0.7 Research submission show an average improvement of 2.7 times over comparable TPU v3 results from Google’s MLPerf Training v0.6 Available submission at the identical scale of 64 chips. Improvements are due to hardware innovations in TPU v4 as well as software improvements.3

Rapid, ongoing progress

Google’s MLPerf Training v0.7 submissions demonstrate our commitment to advancing machine learning research and engineering at scale and delivering those advances to users through open-source software, Google’s products, and Google Cloud.

You can use Google’s second-generation and third-generation TPU supercomputers in Google Cloud today. Please visit the Cloud TPU homepage and documentation to learn more. Cloud TPUs support TensorFlow and PyTorch, and a JAX Cloud TPU Preview is also available.


1. All results retrieved from www.mlperf.org on July 29, 2020. MLPerf name and logo are trademarks. See www.mlperf.org for more information. Chart compares results: 0.7-70 v. 0.7-17, 0.7-66 v. 0.7-31, 0.7-68 v. 0.7-39, 0.7-68 v. 0.7-34, 0.7-66 v. 0.7-38, 0.7-67 v. 0.7-29.
2. All results retrieved from www.mlperf.org on July 29, 2020. MLPerf name and logo are trademarks. See www.mlperf.org for more information. Table shows results: 0.7-68, 0.7-66, 0.7-68, 0.7-66, 0.7-68, 0.7-65, 0.7-68, 0.7-66.
3. All results retrieved from www.mlperf.org on July 29, 2020. MLPerf name and logo are trademarks. See www.mlperf.org for more information. Figure compares results 0.7-70 v. 0.6-2.

Original at: https://cloud.google.com/blog/products/ai-machine-learning/google-breaks-ai-performance-records-in-mlperf-with-worlds-fastest-training-supercomputer

Building a Self-Service, Secure, & Continually Compliant Environment on AWS

by Japjot Walia and Jonathan Shapiro-Ward

Introduction

If you’re an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers’ expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement.

With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities is required. With security and compliance at the top of financial service customers’ agendas, being able to rapidly innovate and stay secure is essential. To achieve exactly that, AWS Professional Services engaged with a major global systemically important bank (G-SIB) customer to help develop ML capabilities and implement a Defense in Depth (DiD) security strategy. This blog post provides an overview of this solution.

The machine learning solution

The following architecture diagram shows the ML solution we developed for a customer. This architecture is designed to achieve innovation, operational performance, and security performance in line with customer-defined control objectives, as well as meet the regulatory and compliance requirements of supervisory authorities.

Machine learning solution developed for customer

This solution is built and automated using AWS CloudFormation templates with pre-configured security guardrails, and is abstracted through AWS Service Catalog. AWS Service Catalog lets you quickly allow users to deploy approved IT services, ensuring governance, compliance, and security best practices are enforced during the provisioning of resources.

Further, it leverages Amazon SageMaker, Amazon Simple Storage Service (S3), and Amazon Relational Database Service (RDS) to facilitate the development of advanced ML models. As security is paramount for this workload, data in S3 is encrypted using client-side encryption, and column-level encryption is applied to sensitive columns in RDS. Our customer also codified their security controls via AWS Config rules to achieve continual compliance.

Compute and network isolation

To enable our customer to rapidly explore new ML models while achieving the highest standards of security, separate VPCs were used to isolate infrastructure, with access controlled by security groups. Core to this solution is Amazon SageMaker, a fully managed service that provides the ability to rapidly build, train, and deploy ML models. Amazon SageMaker notebooks are managed Jupyter notebooks that you can use to:

  1. Prepare and process data
  2. Write code to train models
  3. Deploy models to SageMaker hosting
  4. Test or validate models

In our solution, notebooks run in an isolated VPC with no egress connectivity other than VPC endpoints, which enable private communication with AWS services. When used in conjunction with VPC endpoint policies, these endpoints control which services the notebooks can access. In our solution, this is used to allow the SageMaker notebook to communicate only with resources owned by our AWS Organization through the use of the aws:PrincipalOrgID condition key. AWS Organizations helps provide governance to meet strict compliance regulation, and you can use the aws:PrincipalOrgID condition key in your resource-based policies to easily restrict access to IAM principals from accounts in your organization.
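
As a hedged illustration of that pattern, the sketch below attaches a VPC endpoint policy that admits only principals from a given AWS Organization via the aws:PrincipalOrgID condition key. The endpoint ID and organization ID are hypothetical placeholders, not values from this solution.

```python
# Sketch: restrict a VPC endpoint to principals from one AWS Organization.
# The endpoint ID and organization ID below are hypothetical placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        # Only principals from accounts in this (hypothetical) organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}

ec2 = boto3.client("ec2")
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # hypothetical endpoint
    PolicyDocument=json.dumps(policy),
)
```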

Data protection

Amazon S3 is used to store training data, model artifacts, and other data sets. Our solution uses server-side encryption with customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS) to protect data at rest. SSE-KMS uses an envelope encryption strategy with CMKs. Envelope encryption is the practice of encrypting data with a data key and then encrypting that data key with another key – the CMK. CMKs are created in KMS and never leave KMS unencrypted. This approach allows fine-grained control over access to the CMK, with all access and attempted access to the key logged in Amazon CloudTrail. In our solution, the age of the CMK is tracked by AWS Config and the key is regularly rotated. AWS Config enables you to assess, audit, and evaluate the configurations of deployed AWS resources by continuously monitoring and recording resource configurations, which lets you automate the evaluation of recorded configurations against desired configurations.
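
For illustration, here is a minimal sketch of an SSE-KMS upload under those assumptions; the bucket name, object key, and CMK alias are hypothetical placeholders.

```python
# Sketch: upload an object with SSE-KMS so S3 envelope-encrypts the data with
# a data key that is itself encrypted under the named CMK. Names are
# hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-training-data",    # hypothetical bucket
    Key="datasets/train.csv",          # hypothetical object key
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-cmk",   # hypothetical CMK alias
)
```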

Amazon S3 Block Public Access is also used at the account level to ensure that bucket policies and access-control lists (ACLs) on existing and newly created resources don’t allow public access. Service control policies (SCPs) are used to prevent users from modifying this setting. AWS Config continually monitors S3 and remediates any attempt to make a bucket public.
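
A minimal sketch of what enabling account-level Block Public Access looks like with the AWS SDK for Python; the account ID is a placeholder, and the SCP guardrail described above is configured separately.

```python
# Sketch: turn on S3 Block Public Access for a whole (hypothetical) account.
import boto3

s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId="111122223333",  # hypothetical account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```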

Data in the solution is classified according to its sensitivity, corresponding to the customer’s data classification hierarchy. Classification is achieved through resource tagging, and tags are used in conjunction with AWS Config to ensure adherence to encryption, data retention, and archival requirements.

Continuous compliance

Our solution adopts a continuous compliance approach, whereby the compliance status of the architecture is continuously evaluated and auto-remediated if a configuration change attempts to violate the compliance posture. To achieve this, AWS Config and config rules are used to confirm that resources are configured in compliance with defined policies. AWS Lambda is used to implement a custom rule set that extends the rules included in AWS Config.
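
As a sketch of how a custom rule extends AWS Config, the toy Lambda handler below evaluates a changed resource and reports its compliance back to Config. The naming-prefix check is an invented example, not one of the customer’s actual rules.

```python
# Sketch: a custom AWS Config rule implemented in Lambda. The naming-prefix
# check is a toy example standing in for the customer's real rule set.
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    # Config delivers the changed resource as a JSON string.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    compliant = item["resourceName"].startswith("corp-")  # toy policy check
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```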

Data exfiltration prevention

In our solution, VPC Flow Logs are enabled on all accounts to record information about the IP traffic going to and from network interfaces in each VPC. This allows us to watch for abnormal and unexpected outbound connection requests, which could be an indication of attempts to exfiltrate data. Amazon GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs to identify unexpected and potentially malicious activity within the AWS environment. For example, GuardDuty can detect compromised Amazon Elastic Compute Cloud (EC2) instances communicating with known command-and-control servers.
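
A minimal sketch of enabling VPC Flow Logs on a single VPC with the AWS SDK for Python; the VPC ID, log group name, and IAM role ARN are hypothetical placeholders.

```python
# Sketch: enable VPC Flow Logs on one VPC, publishing to CloudWatch Logs.
# All identifiers below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical VPC
    ResourceType="VPC",
    TrafficType="ALL",                      # record accepted and rejected traffic
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)
```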

Conclusion

Financial services customers are using AWS to develop machine learning and analytics solutions that solve key business challenges while meeting security and compliance needs. This post outlined how Amazon SageMaker, along with multiple security services (AWS Config, GuardDuty, KMS), enables building a self-service, secure, and continually compliant data science environment on AWS for a financial services use case.

Original at: https://aws.amazon.com/blogs/architecture/building-a-self-service-secure-continually-compliant-environment-on-aws/

How Oracle Cloud Customers Can Turn GDPR and CCPA Into a Business Advantage


Saswata Basu

CEO AND FOUNDER OF 0CHAIN

Enterprises face data protection and privacy liability issues under the new General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) regulations, which carry fines of up to $25M. Both regulations empower consumers with key data protection rights, including knowing how their data is being used, a right to access, and a right to opt out of having their data sold to third parties. In light of these new protections, a staggering €500M in fines has been levied since GDPR enforcement began in mid-2018. This problem is growing, and enterprises need a simple, automated solution.

Current Solutions

Today, several software management solutions provide workflows to manage compliance, covering awareness, data mapping, consumer request fulfillment, cookie management, vendor risk, and incident reports. These tools help manage the requirements and avert fines. However, these processes typically require several people to manage the effort and are difficult to scale when customer requests and deletion rates grow exponentially. Fundamentally, the customer does not achieve real privacy and transparency, and the company is still liable for privacy breaches, regardless of compliance requirements.

A New Approach

For organizations to truly solve the privacy issue and achieve zero liability, they must give full ownership and control of data to the customer and provide transparency of data activities. The following architecture lets customers own a storage allocation on a trusted platform, upload their data, and then share an encrypted link with the company. The business then uses this auth token to download a copy for its application, without any change to its current IT processes. In this three-step process, you have the following capabilities:

  • Owning your data: you can upload, update, and delete encrypted data.
  • Sharing the authentication token with the company.
  • Allowing the company to download your data.
A graphic depicting the three-step architecture for user storage, allocation, and sharing of encrypted data.

This simple three-step process is recorded and displayed transparently to customers, without the need for complex processes that can’t scale with customer requests for access and deletion of their data. The responsibility lies with the customer, and not the company.
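
To make the flow concrete, here is a purely hypothetical, in-memory sketch; TrustedStorage, AuthToken, and their methods are invented names for illustration only and are not 0Chain’s actual API.

```python
# Purely hypothetical sketch of the three-step flow above. TrustedStorage and
# AuthToken are invented, in-memory stand-ins; this is NOT 0Chain's API.
from dataclasses import dataclass

@dataclass
class AuthToken:
    allocation_id: str
    path: str
    grantee: str  # the company allowed to download

class TrustedStorage:
    """Toy stand-in for a customer-owned storage allocation."""
    def __init__(self, allocation_id: str):
        self.allocation_id = allocation_id
        self._files: dict[str, bytes] = {}

    def upload(self, path: str, ciphertext: bytes) -> None:
        self._files[path] = ciphertext          # 1. customer uploads encrypted data

    def share(self, path: str, company_id: str) -> AuthToken:
        return AuthToken(self.allocation_id, path, company_id)  # 2. share auth token

    def download(self, token: AuthToken) -> bytes:
        return self._files[token.path]          # 3. company downloads a copy

storage = TrustedStorage("alloc-1")
storage.upload("/profile.json", b"...encrypted bytes...")
token = storage.share("/profile.json", company_id="acme")
copy = storage.download(token)
```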

Shifting Liability

With user ownership, control of data, and a transparent process for securely sharing it with the company, the onus of privacy protection lies with the customer. For a hacker, it’s difficult to attack many customers at once, because the keys would need to be stolen individually.

With liability shifted to the customer, organizations must still ensure that the data is well protected, that all activities are recorded on the ledger, and that any potential for breach is prevented.

Preventing Breach

Copies are inherently vulnerable because a hacker needs to attack only the most vulnerable server to get access to all customer data. One approach to prevent this attack mode is to split the data across multiple servers with different keys. The attacker then needs all of the keys to gain access. Implementing this architecture is fairly simple, as the server access keys can be distributed among teams and individuals within the organization.
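
As a toy illustration of the splitting idea (not 0Chain’s implementation), the sketch below XOR-splits data into n shares so that every share is required to reconstruct the original; any subset of fewer than n shares reveals nothing.

```python
# Toy illustration: XOR-based secret splitting, where ALL n shares are needed
# to reconstruct the data. Not 0Chain's implementation.
import os

def split(data: bytes, n: int) -> list[bytes]:
    """Split data into n shares; every share is required to reconstruct."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]  # random pads
    last = data
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))  # fold each pad into the last share
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original data."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

secret = b"customer record"
assert combine(split(secret, 3)) == secret
```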

A graphic depicting the separation of server access keys.

The Business Advantage

0Chain is deployable directly from Oracle Cloud Marketplace. It provides automation, liability and breach protection, immutability, transparency, and a trusted platform for Oracle Cloud Infrastructure customers to not only abide by GDPR and CCPA regulations, but also use them to their business advantage. With 0Chain, organizations can brand themselves as leaders in privacy, gain higher usage, add revenue, and differentiate their products in the market.

You can deploy 0Chain on-premises, in the cloud, or in hybrid environments. The user interface automatically creates a key and an allocation, and stores the key based on your password, so only you can access your key and data. This key is registered on the platform’s blockchain, and any action of uploading a new file (such as a post, image, or video), updating an existing file (such as user profile data), or deleting one is recorded on the ledger.

The files are automatically shared with the business through an auth token, so only the business can decrypt and use them for its operations. Each time the company uses customer data, it must record the metadata on the blockchain, creating a transparent audit trail that you can instantly visualize, giving you complete trust in the organization. The company can handle this process as a parallel batch operation offline, without blocking the data it uses inline for daily business activities.

Migrating customer data to this trusted platform requires a simple, phased approach. In the first phase, only new data is sent to the platform. In the second phase, older data is migrated. In subsequent phases, more granular datasets can be implemented, giving the company a higher level of precision in the permission settings it can offer the customer, perhaps with a fee to cover development costs.

FAQ

  • How do you achieve zero liability?
Since you own and control the data, the company is not liable. You provide explicit permission through signed transactions, which can’t be disputed since the action is recorded on the blockchain and is immutable—something a company can’t go back and change in its database records.
  • How do you make breach impossible?
Since each file is split across multiple servers, a hacker needs the keys to all of them to gain access. The number of splits is configurable, and each additional split makes the keys harder to get hold of, as long as they’re distributed across individuals and teams.
  • Does 0Chain replace the current privacy software tools?
No. 0Chain helps you protect your data better and shifts liability to the customer. It can handle billions of customer privacy requests and provides instant compliance reports from ledger transactions, using an integrated search tool for specific files, users, and activities that anyone can access. Through these efforts, 0Chain provides an open, trusted compliance platform.
  • How do you ensure that the enterprise aligns with your consented dataset?
    Whenever a dataset is used by the enterprise, they send a signed transaction with the metadata of the content to record the activity. If the company misuses the dataset, you can mount a challenge based on the exposed data and the consent that’s recorded in the ledger.

Want to Know More?

To learn more, visit our website and trial our product for free through Oracle Cloud Marketplace.

Original at: https://blogs.oracle.com/cloud-infrastructure/how-oracle-cloud-customers-can-turn-gdpr-and-ccpa-into-a-business-advantage

How Oracle is Helping You Maintain a Strong Security Posture in the Cloud


Paul Toal

DISTINGUISHED SOLUTION ENGINEER – CYBER SECURITY


So, you’ve just signed up to a shiny new cloud provider. It’s exciting when you realise that you not only have an almost unlimited supply of Infrastructure-as-a-Service (IaaS) at your fingertips, but you also have a plethora of various platform services just waiting for you to use. However, before you get carried away spinning up compute and uploading your files into storage, you need to realise that you have a shared responsibility for security, as shown in Figure 1.

Figure 1 – Security is not just the job of the Cloud Service Provider

Sure, the Cloud Service Provider (CSP) has a set of security responsibilities, but so do you. At a minimum, irrespective of whether you are using Software, Platform, or Infrastructure as-a-service, you will always be responsible for your data, your users, and to some extent, your configuration. As you move away from SaaS towards IaaS, your responsibilities grow as you become responsible for software, operating systems, patching etc.

It seems that, whilst we have talked for a long time as an industry about the cloud security shared responsibility model, there is still plenty of confusion out there. The two statistics in Figure 1 come from the Oracle and KPMG Cloud Threat Report 2019, and reviewing the figures from the recently released report for 2020, the situation isn’t any better. Only 8% of this year’s respondents stated that they fully understand the cloud security shared responsibility model. I’ve discussed this topic before when looking at how “Security must be easier and not just for the experts”.

In this article, I want to look at Cloud Security Posture Management (CSPM) and some of the use cases that come to mind, as well as those that I am hearing from customers. I’ll discuss a number of use cases, why they are important, and how Oracle Cloud Infrastructure (OCI) is helping you to meet and address your shared responsibilities. So, if you are ready, let’s get started.

Before we can look at use cases, we need to understand what we mean by Cloud Security Posture Management. Simply put, it is how you ensure that your cloud environment is configured in a secure manner, that it remains secure over time, and that configuration changes or activities don’t weaken that posture. Gaining that secure position, never mind maintaining it, can be difficult due to a number of factors, including:

  1. Larger exposure due to the incredible rate of growth of cloud resources
  2. More cloud services, meaning more complexity and more settings to manage
  3. Fewer experts, caused by a large IT security skills shortage

Let’s examine some use cases that we need to address with CSPM and then we’ll discuss how Oracle can help you to meet your security responsibilities in this area.

Use Case 1 – Data exposure through public buckets

No doubt you will have seen plenty of data breach stories in the media where sensitive data was found on object storage buckets that had public visibility. There may be valid use cases where a bucket should be public, however this should be on a very tightly controlled exception basis.

In OCI, it’s difficult to create a public object storage bucket by mistake: you must first create the bucket and then explicitly change its visibility to make it public.

This may be an intentional change in visibility of the bucket, but this also could have been a temporary change, such as for debugging or testing.

While the bucket is public, if it contains sensitive data you are at risk of a data breach. An attacker just needs to find the bucket on the internet, and believe me, there are plenty of people looking for them.
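
For comparison with the automated approach discussed later, here is a minimal audit-and-remediate sketch using the OCI Python SDK; the compartment OCID is a placeholder, and Cloud Guard (covered below) automates exactly this kind of remediation for you.

```python
# Sketch: find public object storage buckets and flip them back to private.
# The compartment OCID is a hypothetical placeholder.
import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data
compartment_id = "ocid1.compartment.oc1..example"  # hypothetical

for summary in client.list_buckets(namespace, compartment_id).data:
    bucket = client.get_bucket(namespace, summary.name).data
    if bucket.public_access_type != "NoPublicAccess":
        print(f"Public bucket found: {bucket.name}")
        # Remediate: set the bucket back to private.
        client.update_bucket(
            namespace, bucket.name,
            oci.object_storage.models.UpdateBucketDetails(
                public_access_type="NoPublicAccess"),
        )
```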

Use Case 2 – Ensuring only approved OS types and versions are used for compute

Another common use case is looking at the images that should be used when creating new compute instances. OCI provides a wide variety of images that can be used, including:

  • Platform images, e.g. Windows Server, Ubuntu, Linux etc.
  • Oracle Images, e.g. E-Business Suite, Enterprise Manager etc.
  • Partner Images, e.g. Next generation firewalls, GPU machines etc.

You can also bring your own images, as well as using existing boot volumes. Now, imagine you have a standard, approved OS type that your compute instances must be built with, or you have a custom image that you want to ensure is used as a gold build for all instances. This image may have your corporate-standard IT tools on it, such as anti-virus software and corporately approved packages. It may also have a number of services removed or hardening policies applied.

As part of enforcing your security policy, you need to ensure that all compute instances are using the approved OS types and versions, or are using your gold build images.

This means you need to identify any compute instances that don’t use the approved images. Furthermore, you may want to automatically shut down any instances violating that policy, or even terminate them. In some cases, you may also want to disable the account of the administrator who created the non-approved compute instances.
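
A hedged sketch of that enforcement using the OCI Python SDK; the compartment OCID and approved image OCIDs are placeholders, and a production deployment would more likely react to audit events than poll like this.

```python
# Sketch: stop running instances that weren't built from an approved image.
# The compartment and image OCIDs are hypothetical placeholders.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)
compartment_id = "ocid1.compartment.oc1..example"  # hypothetical
approved_images = {"ocid1.image.oc1..goldbuild"}   # hypothetical gold builds

for instance in compute.list_instances(compartment_id).data:
    if (instance.lifecycle_state == "RUNNING"
            and instance.image_id not in approved_images):
        print(f"Non-approved image on {instance.display_name}; stopping")
        compute.instance_action(instance.id, "STOP")
```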

Use Case 3 – Adding internet-based routes to your Virtual Cloud Network

The next use case addresses network access to your cloud environment. One common design pattern is to deploy the cloud as an expansion of your existing data centre. In these cases, it is common for a VPN or private connection (called FastConnect in OCI) to be deployed between your data centre and your cloud environment. All access to those cloud services is directed down this connection, and there is no direct access over the internet.

Now, let’s take the scenario where a network administrator makes a change to your virtual cloud network (VCN). They add an internet gateway and change the routing rules to use this new gateway. In our use case, neither of those actions should be performed on this particular VCN. It might be that the administrator has changed the wrong VCN by mistake, or it could be something more nefarious. Either way, the change needs to be identified quickly and fixed to ensure that any security risk is minimised.

Use Case 4 – Key Rotation

For our final use case, let’s think about key management. Cryptographic keys are used in lots of places, whether as the basis for in-transit encryption, or for encryption at rest. Many organisations have IT security policies governing the lifecycle and use of keys, including how often keys must be changed.

Within OCI, Oracle will manage keys for you where you have no policy stipulating that you must manage your own. Any time you create a storage device (e.g. object storage bucket, boot volume, block volume, file storage), then it will be encrypted with an Oracle-managed key. However, we also provide you the ability to manage your own keys, through OCI Vault, a service backed by highly-available FIPS 140-2 Level 3 Hardware Security Modules (HSMs).

If you do choose to manage your own keys, you will likely need to rotate them periodically to ensure the amount of data encrypted by any one key is not too great.

Therefore, being able to identify keys that you manage that haven’t been rotated in line with your security policy is important. Even better would be automatically rotating those keys to help ensure you’re meeting all of your regulatory compliance needs and industry best practices.
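
A sketch of policy-driven rotation with the OCI Python SDK, under the assumption of a 90-day rotation window; the vault management endpoint and compartment OCID are placeholders. In OCI Vault, rotating a key means creating a new key version.

```python
# Sketch: rotate customer-managed keys whose current version is older than a
# (hypothetical) 90-day policy window. Endpoint and OCIDs are placeholders.
from datetime import datetime, timedelta, timezone

import oci

config = oci.config.from_file()
kms = oci.key_management.KmsManagementClient(
    config,
    service_endpoint="https://example-management.kms.us-ashburn-1.oraclecloud.com")
compartment_id = "ocid1.compartment.oc1..example"  # hypothetical
max_age = timedelta(days=90)                       # hypothetical policy window

for summary in kms.list_keys(compartment_id).data:
    key = kms.get_key(summary.id).data
    version = kms.get_key_version(key.id, key.current_key_version).data
    if datetime.now(timezone.utc) - version.time_created > max_age:
        kms.create_key_version(key.id)  # rotation = new key version
```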

Now that we’ve talked through a number of use cases, we understand the problem. So, how is Oracle helping in this area?

Back at Oracle OpenWorld 2019, our vision for OCI security was announced, focused on making security easier, more automated, and always-on. To deliver that vision, a number of capabilities were announced, including Oracle Cloud Guard, which is built around several key design principles.

One of the most interesting design principles is our use of Embedded Expertise. What this means is that Oracle knows OCI best. We know what security controls are available and how best to apply them at scale. We also know what problems to look for and how to apply security features to mitigate those problems. By applying all of our own embedded expertise we are taking the burden away from you and removing the need for you to build all of these policies yourself.

Let’s take one of our use cases above as an example and look at how Cloud Guard would address a security risk such as a public bucket.

Within Cloud Guard, Oracle uses our own embedded expertise to create out-of-the-box rules to identify common problems, including, in this case, the detection of any buckets that are public. Of course, you can tune the rules to add various conditions. For example, in this particular rule, we can tune it to exclude any buckets that are authorised to be public.

In my scenario, I have created two buckets and made them both public. As my very inventive names suggest, one of my buckets is allowed to be public and therefore shouldn’t trigger any alerts.

Within Cloud Guard I am using the out-of-the-box rules but have tuned the detector rule for public bucket detection to exclude the bucket named authorised-public-bucket.

Cloud Guard identifies any issues it finds within OCI as Problems, but we can see that in this scenario it flags only the unauthorised-public-bucket as a Problem.

Note also how Cloud Guard recognised that we are using the default, Oracle-managed keys for the encryption of these buckets rather than our own customer-managed keys through OCI Vault.

If configured to do so, Cloud Guard can automatically remediate the problem, in this case by changing the bucket back to private, which it has done here.

It can also send notifications. In my case, I am sending all critical notifications to Slack and all non-critical notifications to email. However, I could just as easily send them to PagerDuty, a custom HTTPS URL, or even call a serverless function using OCI Functions (by writing a function using the open-source Fn Project).

As you can see, Cloud Guard has not only identified a problem with my object storage buckets, but it has informed me about it and provided automated, closed-loop remediation to remove the security risk.

Cloud Guard is currently in Limited Availability, due to be released later this year. Therefore, don’t forget to refer to Oracle’s Safe Harbor statement below:

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Original at: https://blogs.oracle.com/cloud-infrastructure/how-oracle-is-helping-you-maintain-a-strong-security-posture-in-the-cloud-v2

Microsoft announces next evolution of Azure VMware Solution

Takeshi Numoto Commercial Chief Marketing Officer, Microsoft

Today, I’m excited to announce the preview of the next generation of Azure VMware Solution, designed, built, and supported by Microsoft and endorsed by VMware.

With the current economic environment, many organizations face new challenges in finding rapid and cost-effective solutions that enable business stability, continuity, and resiliency. The new Azure VMware Solution empowers customers to seamlessly extend or completely migrate their existing on-premises VMware applications to Azure without the cost, effort, or risk of re-architecting applications or retooling operations. This helps our customers gain cloud efficiency and enables them to innovate at their own pace with Azure services across security, data, and artificial intelligence, as well as unified management capabilities. Customers can also save money on Windows Server and SQL Server workloads running on Azure VMware Solution by taking advantage of Azure Hybrid Benefit.

Microsoft first party service

The new Azure VMware Solution is a first party service from Microsoft. By launching a new service that is directly owned, operated, and supported by Microsoft, we can ensure greater quality, reliability, and direct access to Azure innovation for our customers while providing you with a single point of contact for all your needs. With today’s announcement and our continued collaboration with VMware, the new Azure VMware Solution lays the foundation for our customers’ success in the future.

Sanjay Poonen, Chief Operating Officer at VMware commented, “VMware and Microsoft have a long-standing partnership and a shared heritage in supporting our customers. Now more than ever it is important we come together and help them create stability and efficiency for their businesses. The new Azure VMware Solution gives customers the ability to use the same VMware foundation in Azure as they use in their private data centers. It provides a consistent operating model that can increase business agility and resiliency, reduce costs, and enable a native developer experience for all types of applications.”

These comments were echoed by Jason Zander, Executive Vice President at Microsoft, who said, “This is an amazing milestone for Microsoft and VMware to meet our customers where they are today on their cloud journey. Azure VMware Solution is a great example of how we design Azure services to support a broad range of customer workloads. Through close collaboration with the VMware team, I’m excited that customers running VMware on-premises will be able to benefit from Azure’s highly reliable infrastructure sooner.”

The new solution is built on Azure, delivering the speed, scale, and high availability of our global infrastructure. You can provision a full VMware Cloud Foundation environment on Azure and gain compute and storage elasticity as your business needs change. Azure VMware Solution is VMware Cloud Verified, giving customers confidence they’re using the complete set of VMware capabilities, with consistency, performance, and interoperability for their VMware workloads.

Access to VMware technology and experiences

Azure VMware Solution allows you to leverage your existing investments in VMware skills and tools. Customers can maintain operational consistency as they accelerate a move to the cloud with the use of familiar VMware technology, including VMware vSphere, HCX, NSX-T, and vSAN. Additionally, the new Azure VMware Solution has an option to add VMware HCX Enterprise, which will enable customers to further simplify their migration efforts to Azure, including support for bulk live migrations. HCX also enables customers running older versions of vSphere on-premises to move seamlessly to newer versions of vSphere running on Azure VMware Solution.

Seamless Azure integration

Through integration with Azure management, security, and services, Azure VMware Solution provides the opportunity for customers to continue to build cloud competencies and modernize over time. Customers maintain the choice to use the native VMware tools and management experiences they are familiar with, and incrementally leverage Azure capabilities as required.

As we look to meet customers where they are today, we are deeply investing in support for hybrid management scenarios and automation that can streamline the journey. We are excited to share more soon about future hybrid capabilities as they relate to Azure VMware Solution.

Leverage Azure Hybrid Benefit pricing for Microsoft workloads

Take advantage of Azure as the best cloud for your Microsoft workloads running in Azure VMware Solution, with unmatched pricing benefits for Windows Server and SQL Server. Azure Hybrid Benefit extends to Azure VMware Solution, allowing customers with Software Assurance to maximize the value of existing on-premises Windows Server and SQL Server license investments when migrating or extending to Azure. In addition, Azure VMware Solution customers are also eligible for three years of free Extended Security Updates on 2008 versions of Windows Server and SQL Server. The combination of these unmatched pricing benefits on Azure ensures customers can simplify cloud adoption with cost efficiencies across their VMware environments.

In addition, at general availability Reserved Instances will also be available for Azure VMware Solution customers, with one-year and three-year options on dedicated hosts.

Global availability and expansion

The Azure VMware Solution preview is initially available in US East and West Europe Azure regions. We expect the new Azure VMware Solution to be generally available in the second half of 2020 and at that time, availability will be extended across more regions. Plans on regional availability for Azure VMware Solution will be made available here as they are disclosed.

To register your interest in taking part in the Azure VMware Solution preview, please contact your Microsoft Account Representative or contact our sales team.

Learn more about Azure VMware Solution on the Azure website.

Original at: https://azure.microsoft.com/pt-br/blog/microsoft-announces-next-evolution-of-azure-vmware-solution/

Six reasons customers trust Azure to run their SAP solutions

Talal Alqinawi Senior Director, Azure Marketing

As global organizations across every industry adjust to the new normal, SAP solutions are playing an increasingly vital role in addressing immediate needs and paving a path to a resilient future. Now more than ever, companies are realizing the value of running their SAP solutions in the cloud. While some are using advanced analytics to process their SAP data to make real-time business decisions, others are integrating their SAP and non-SAP data to build stronger supply chains. Whether it’s meeting urgent customer needs, empowering employees to make quick decisions, or planning for the future, customers running SAP solutions in the cloud have been well prepared to face the new reality. Check out how Walgreens delivers superior customer service with SAP solutions on Microsoft Azure.

Many organizations running their SAP solutions on-premises have become increasingly aware of the need to be more agile and responsive to real-time business needs. According to an IDC survey, 54 percent of enterprises expect that future demand for cloud software will increase. As global organizations seek agility, cost savings, risk reduction, and immediate insights from their ERP solutions, here are some of the reasons many of the largest enterprises choose Microsoft Azure as their trusted partner when moving their SAP solutions to the cloud.

1. Running SAP solutions on Azure delivers immediate insights and increased agility

“Now that we have SAP in the cloud … we have a platform for digital innovation in the cloud … With Azure, we’ve lifted our entire IT landscape up to a higher level where we can drive experimentation with much less risk and much less cost.”—Sarah Haywood, Chief Technology Officer and Vice President of Technology at Carlsberg Group

Organizations running SAP solutions on Azure gain real-time and predictive insights that empower them to break into new ways of doing business. Azure offers the ability to tap into more than 100 cloud services, access SAP Cloud Platform, apply intelligent analytics, and also integrate with an organization’s existing productivity and collaboration tools such as Microsoft 365, Microsoft Teams, Microsoft Power Apps, and Microsoft Power BI.

With Azure, organizations can integrate their SAP and non-SAP data through an extensive portfolio of Azure data services and create real-time dashboard views of the current operations using SAP and Microsoft business intelligence tools. Using intelligent analytics deepens real-time and predictive insights to improve decision-making by responding dynamically as business conditions change, and how that change impacts your customers or products. Integration with Teams and Microsoft 365 improves team collaboration and enhances user experience and productivity. Using Microsoft Power Automate, Power Apps, and Power BI, organizations can create customized workflows, apps, and business insight reports without having to write any code.


2. An ever-evolving and growing set of Azure cloud services drives continuous innovation

“We are looking at drones, IoT, RFID sensors, artificial intelligence, chatbots, and every other futuristic technology you can think of to do mining better, and with Azure we have a broad foundation for exploring all that.”—Head of Enterprise IT Services, Rio Tinto

While Zuellig Pharma is building an app that uses Azure blockchain services and data from the SAP Business Suite on HANA to track and capture counterfeit products and illegal parallel imports in its region, Walgreens plans to use AI and machine learning to develop new customer offerings quickly and respond in real time to changes in the marketplace.

Customers such as Rio Tinto are using Azure’s secure and scalable IoT applications to pilot a solution that takes real-time data from trucks, drills, smelters, and other equipment and analyzes it for insight into equipment health, preemptive maintenance, supply chain efficiency, and other operational intelligence. Additionally, with DevOps with GitHub and Azure Kubernetes Service, customers can build, manage, and deploy applications on a massive global network.

3. Running SAP solutions on Azure offers cost savings

“We chose to migrate to Azure for three main reasons: cost, strategy, and speed … We saw a big cost advantage with SAP HANA on Azure over the cloud we currently used.”—David South, Director of Architecture at Coke One North America Services

A Forrester study showed customers achieved more than 100 percent ROI, a 50 percent reduction in data center costs, and a 100 percent reduction in SAP release delays by migrating their SAP systems to Azure. Moving to Azure not only eliminates capital expenditure and the cost of underutilized hardware, but also offers cost management options such as on-demand scaling during peak usage periods, cheaper storage tiers, and optimized disaster recovery environments.

By running SAP solutions on Azure, organizations replace expensive, manual, error-prone processes with automated, flexible ones; with a single ticket-to-solution experience, enterprises empower employees to focus on value-added activities by putting data in their hands.

4. Running SAP solutions on Azure offers immense flexibility and scalability

“Moving to Azure gives us the scalability we need … running SAP on Azure gives us the agility and flexibility we need to disrupt the healthcare industry in a way that improves our customers’ access to the products and services they need.”—Dan Regalado, Vice President of Global Technology Transformation and Strategic Partnerships, Walgreens

Customers across every industry run their largest production SAP landscapes on Azure because it is a proven cloud platform certified by SAP to run their most mission-critical SAP applications. Azure offers the industry’s most performant and scalable cloud infrastructure—offering 192 GB to 12 TB SAP HANA-certified VMs in more regions than any other public cloud provider, along with support for both Linux and Windows OS. Azure offers on-demand scalability and agility that reduces time to market—customers can spin up or spin down resources as needed. For instance, Daimler AG reduced operational costs by 50 percent and increased agility by spinning up resources on demand in 30 minutes with SAP S/4HANA and Azure.

Azure also offers access to more than 1,000 pre-built integrations, out-of-the-box business services, SAP HANA services, and apps built by SAP and our partners. Customers such as Tate & Lyle appreciate that with Azure, they get access to compute, network, and storage resources preconfigured for SAP HANA that they didn’t have to build, install, or manage.

5. SAP solutions on Azure offer best-in-class security, compliance, and business continuity

“If you go to the Microsoft Trust Center, you can see the tremendous investment Microsoft makes in security certifications and compliance. It would have been very costly for Kennametal to implement that level of security within our own environment. Instead, we get to inherit it from Microsoft.”—John Johnston, Senior Manager, Global Information Security and Compliance, Kennametal

Azure’s intelligent security services are backed by a $1 billion annual investment in enterprise-grade security and compliance and 3,500 cybersecurity professionals. Azure has the most compliance offerings of any public cloud. Azure offers best-in-class security services such as Azure Sentinel for SIEM, Azure Security Center for threat monitoring, and Azure Active Directory for identity management. Additionally, customers can leverage built-in availability and recovery options such as Azure Backup and Azure Site Recovery to ensure business continuity and data protection. Microsoft teams work closely with partners to ensure that critical systems remain online during migration, and offer a robust set of joint planning workshops, migration programs such as FastTrack, POCs, and training and certifications.

6. Organizations benefit from the trusted partnership between SAP and Microsoft

“We needed a provider that enjoys a close partnership with SAP, understands our needs, and can accelerate our migration and expand our capabilities. Azure answered every need.”—Joshua Sefchek, Manager of Cloud and Enterprise Services, Toyota Material Handling North America

After decades of working together to serve our customers, SAP and Microsoft deepened their relationship by signing the Embrace initiative. As part of Embrace, SAP will lead with Azure to move on-premises SAP ERP and SAP S/4HANA customers to the cloud through industry-specific best practices, reference architectures, and cloud-delivered services. Our engineering teams, co-located in Germany and Redmond, Washington, work together to develop joint reference architectures, product integration roadmaps, and best practices; our industry teams are jointly developing industry-specific transformation roadmaps; and our support teams have developed collaborative support models.

SAP and Microsoft have been partners for more than 25 years and are also mutual customers. Microsoft is the only cloud provider that’s been running SAP for its own finance, HR, and supply chains for the last 20 years, including SAP S/4HANA. Likewise, SAP has chosen Azure to run a growing number of its own internal system landscapes, including those based on SAP S/4HANA. Microsoft IT and SAP IT generously share their learnings from running SAP solutions on Azure with our customers. Check out the latest MSIT webinar and SAP IT webinar for some best practices.

More than 95 percent of Fortune 500 companies run their business on Azure. Our experience and history give us a powerful understanding of the needs of enterprise customers. Together with SAP, customers have trusted us with their most critical workloads for decades because we understand what it takes to support our customers in their journey to the cloud.

We look forward to seeing you this month at the virtual SAPPHIRE and ASUG events. Learn more about SAP solutions on Azure and read today’s announcement about new offerings to help our SAP customers optimize costs and increase agility.

Original at: https://azure.microsoft.com/pt-br/blog/six-reasons-customers-trust-azure-to-run-their-sap-solutions/