Configuring a Reverse Proxy in Apache

There are several situations that call for a reverse proxy. The most common one we encounter in day-to-day work is adding an extra layer of protection in front of a server that may be running older software and is not secure enough to be exposed directly to the internet.

Another situation is load balancing across application servers, with the reverse proxy acting as the central point that receives connections and distributes them among the available application servers.

The simplest configuration has a single application server as the backend being accessed through the reverse proxy. It requires only the directives below.

ProxyPass "/" "http://localhost:8080/"
ProxyPassReverse "/" "http://localhost:8080/"

These directives should preferably be placed inside a VirtualHost, to avoid conflicts with other sites that the same Apache instance may be serving. However, if the server is dedicated to reverse-proxying a single site, they can go in the main section of httpd.conf.
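
As an illustration, a minimal VirtualHost wrapping these directives could look like the sketch below (the ServerName is a placeholder; mod_proxy and mod_proxy_http must be loaded for ProxyPass to work):

<VirtualHost *:80>
    ServerName www.exemplo.com.br

    # Forward all requests to the backend and rewrite redirect headers on the way back
    ProxyPass "/" "http://localhost:8080/"
    ProxyPassReverse "/" "http://localhost:8080/"
</VirtualHost>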

What these directives do is pass every request on to the backend. It is also possible, however, to proxy only specific paths.

ProxyPass "/imagens/" "http://localhost:8080/imagens/"
ProxyPassReverse "/imagens/" "http://localhost:8080/imagens/"

In the example above, a request for http://www.exemplo.com.br/imagens/logo.png would be proxied to the service running on port 8080, while a request for http://www.exemplo.com.br/index.php would be served by the local Apache itself.

An important point is that the paths on the proxy and on the backend do not have to be identical.

ProxyPass "/" "http://localhost:8080/site/"
ProxyPassReverse "/" "http://localhost:8080/site/"

With the directives above, an access to http://www.exemplo.com.br/index.php would be forwarded to the backend at http://localhost:8080/site/index.php.

This is quite useful, and it even allows different URL paths to be served by different backend servers.
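
For example, a sketch (the backend hostnames are placeholders) where static content and the application are served by different backends:

ProxyPass "/imagens/" "http://static01:8080/imagens/"
ProxyPassReverse "/imagens/" "http://static01:8080/imagens/"
ProxyPass "/" "http://app01:8080/"
ProxyPassReverse "/" "http://app01:8080/"

Note that the more specific path must be declared before the catch-all "/", since Apache evaluates ProxyPass directives in configuration order and the first match wins.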

Updating Standalone VMware ESXi Hosts

When VMware ESXi is used as the hypervisor without vCenter to manage it, updating and applying patches can be a somewhat cumbersome process. In this post I will show two ways to perform this update.

Both methods must be run from the command line on the host, so enable SSH access on the ESXi host before starting.

The host also needs to be in maintenance mode, with all VMs powered off.
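
If the host is not yet in maintenance mode, it can be enabled from the same SSH session (assuming all VMs are already powered off):

# esxcli system maintenanceMode set --enable true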

The first method, which in theory is the simplest, updates the host automatically based on its image profile.

Identify the current VMware ESXi version:

# esxcli system version get

Then check the host's Image Profile:

# esxcli software profile get

Enable internet access from the ESXi host by allowing it in the local firewall rules. If necessary, also allow the traffic through your perimeter firewall.

# esxcli network firewall ruleset set -e true -r httpClient

To update ESXi from the online repository, use the command below:

# esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-20190402001-standard

The “-p” parameter specifies the Image Profile identified in the previous step.

The second method is similar, but instead of updating from the remote repository, you first download the update bundle file and then apply it.

After downloading, place the update bundle's .zip file on one of the host's datastores and run the “vib update” command, passing the bundle path in the “-d” parameter.

# esxcli software vib update -d /vmfs/volumes/5cb37c65-0d32cf9e-7ed2-984be1791ea8/VIB/ESXi670-202004002.zip

After the update, the host must be rebooted.
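
For example, the reboot can be triggered directly from the SSH session (the reason text is free-form; the host must still be in maintenance mode):

# esxcli system shutdown reboot --reason "Applying ESXi patches"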

Configuring RedHat 5 to Use the CentOS 5 Vault Repository

First of all, it must be said that running such an old RedHat/CentOS release is not recommended, especially if it is publicly reachable from the Internet, since these versions have been out of support for quite some time and are no longer receiving security updates, which can be critical.

But if for any reason you do need to run this version, this procedure can be useful, especially if you use RedHat and do not have an active subscription.

Warning: do not run this procedure on RedHat installations that are in production. Although CentOS is, in theory, a 'copy' of RedHat built from the same source code, there may still be binary-level differences that break the application running on the server.

Clean the local yum cache:

# yum clean all

Download the CentOS 5 GPG key and the most recent release, YUM, and yum plugin packages:

# wget http://vault.centos.org/5.11/os/x86_64/RPM-GPG-KEY-CentOS-5
# wget http://vault.centos.org/5.11/os/x86_64/CentOS/centos-release-5-11.el5.centos.x86_64.rpm
# wget http://vault.centos.org/5.11/os/x86_64/CentOS/centos-release-notes-5.11-0.x86_64.rpm
# wget http://vault.centos.org/5.11/os/x86_64/CentOS/yum-3.2.22-40.el5.centos.noarch.rpm 
# wget http://vault.centos.org/5.11/os/x86_64/CentOS/yum-updatesd-0.9-6.el5_10.noarch.rpm
# wget http://vault.centos.org/5.11/os/x86_64/CentOS/yum-fastestmirror-1.1.16-21.el5.centos.noarch.rpm

Import the CentOS GPG key:

# rpm --import RPM-GPG-KEY-CentOS-5

Remove the RedHat release package, since it will be replaced by the CentOS one:

# rpm -e --nodeps redhat-release

Install the downloaded packages:

# rpm -Uvh --force yum*.rpm centos*.rpm

Update the Base repository file with the Vault repository information:

# head -37 /etc/yum.repos.d/CentOS-Vault.repo > /etc/yum.repos.d/CentOS-Base.repo
# sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/CentOS-Base.repo

Remove the now unnecessary file with the Vault information to avoid conflicts:

# rm -f /etc/yum.repos.d/CentOS-Vault.repo
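
At this point you can optionally confirm that the enabled repositories now point to the Vault:

# yum repolist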

You can then run yum to install the packages you need, for example:

# yum install net-snmp

Converting a PFX Certificate to PEM Format

A PFX certificate file can be converted into two kinds of PEM output: keeping the certificate and the private key in the same file, known as a combined PEM, or splitting the certificate and the private key into separate files. In both cases we will use the OpenSSL utility, which is available for both Linux and Windows.

To convert a PFX certificate into a combined PEM file, use the command below.

# openssl pkcs12 -in certificado.pfx -out certificado.pem -nodes

Where:
– certificado.pfx: the input file in PFX format.
– certificado.pem: the PEM file that will be generated, containing both the certificate and the private key.
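
To quickly check the result, you can print the certificate's subject and validity dates from the generated PEM file (file name as above):

# openssl x509 -in certificado.pem -noout -subject -dates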

To convert the PFX certificate into separate PEM files (certificate + private key), use the commands below.

Extracting the private key:

# openssl pkcs12 -in certificado.pfx -nocerts -out certificado.key -nodes

Where:
– certificado.pfx: the input file in PFX format.
– certificado.key: the file generated with the certificate's private key.

Extracting the certificate:

# openssl pkcs12 -in certificado.pfx -clcerts -nokeys -out certificado.pem

Where:
– certificado.pfx: the input file in PFX format.
– certificado.pem: the file that will be generated with the certificate in PEM format.

Note: it is worth stressing that the PEM files generated with the private key will not be protected by a password, due to the use of the “-nodes” parameter.
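
If you later need the private key protected by a passphrase again, one option is to re-encrypt it with OpenSSL (the output file name below is just a suggestion):

# openssl rsa -in certificado.key -aes256 -out certificado-protegido.key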

Google breaks AI performance records in MLPerf with world’s fastest training supercomputer

Naveen Kumar – Google AI

Fast training of machine learning (ML) models is critical for research and engineering teams that deliver new products, services, and research breakthroughs that were previously out of reach. Here at Google, recent ML-enabled advances have included more helpful search results and a single ML model that can translate 100 different languages.

The latest results from the industry-standard MLPerf benchmark competition demonstrate that Google has built the world’s fastest ML training supercomputer. Using this supercomputer, as well as our latest Tensor Processing Unit (TPU) chip, Google set performance records in six out of eight MLPerf benchmarks.

Figure 1: Speedup of Google’s best MLPerf Training v0.7 Research submission over the fastest non-Google submission in any availability category. Comparisons are normalized by overall training time regardless of system size, which ranges from 8 to 4096 chips. Taller bars are better.1

We achieved these results with ML model implementations in TensorFlow, JAX, and Lingvo. Four of the eight models were trained from scratch in under 30 seconds. To put that in perspective, consider that in 2015, it took more than three weeks to train one of these models on the most advanced hardware accelerator available. Google’s latest TPU supercomputer can train the same model almost five orders of magnitude faster just five years later.

In this blog post we’ll look at some of the details of the competition, how our submissions achieve such high performance, and what it all means for your model training speed.

MLPerf models at-a-glance

MLPerf models are chosen to be representative of cutting-edge machine learning workloads that are common throughout industry and academia. Here’s a little more detail on each MLPerf model in the figure above:

  • DLRM represents ranking and recommendation models that are core to online businesses from media to travel to e-commerce
  • Transformer is the foundation of a wave of recent advances in natural language processing, including BERT
  • BERT enabled Google Search’s “biggest leap forward in the past five years” 
  • ResNet-50 is a widely used model for image classification
  • SSD is an object detection model that’s lightweight enough to run on mobile devices
  • Mask R-CNN is a widely used image segmentation model that can be used in autonomous navigation, medical imaging, and other domains (you can experiment with it in Colab)

In addition to the industry-leading results at maximum scale above, Google also provided MLPerf submissions using TensorFlow on Google Cloud Platform that are ready for enterprises to use today. You can read more about those submissions in this accompanying blog post.

The world’s fastest ML training supercomputer

The supercomputer Google used for this MLPerf Training round is four times larger than the Cloud TPU v3 Pod that set three records in the previous competition. The system includes 4096 TPU v3 chips and hundreds of CPU host machines, all connected via an ultra-fast, ultra-large-scale custom interconnect. In total, this system delivers over 430 PFLOPs of peak performance.

Table 1: All of these MLPerf submissions trained from scratch in 33 seconds or faster on Google’s new ML supercomputer.2

Training at scale with TensorFlow, JAX, Lingvo, and XLA

Training complex ML models using thousands of TPU chips required a combination of algorithmic techniques and optimizations in TensorFlow, JAX, Lingvo, and XLA. To provide some background, XLA is the underlying compiler technology that powers all of Google’s MLPerf submissions, TensorFlow is Google’s end-to-end open-source machine learning framework, Lingvo is a high-level framework for sequence models built using TensorFlow, and JAX is a new research-focused framework based on composable function transformations. The record-setting scale above relied on model parallelism, scaled batch normalization, efficient computational graph launches, and tree-based weight initialization.

All of the TensorFlow, JAX, and Lingvo submissions in the table above—implementations of ResNet-50, BERT, SSD, and Transformer—trained on 2048 or 4096 TPU chips in under 33 seconds each.

TPU v4: Google’s fourth-generation Tensor Processing Unit chip

Google’s fourth-generation TPU ASIC offers more than double the matrix multiplication TFLOPs of TPU v3, a significant boost in memory bandwidth, and advances in interconnect technology. Google’s TPU v4 MLPerf submissions take advantage of these new hardware features with complementary compiler and modeling advances. The results demonstrate an average improvement of 2.7 times over TPU v3 performance at a similar scale in the last MLPerf Training competition. Stay tuned; more information on TPU v4 is coming soon.

Figure 2: TPU v4 results in Google’s MLPerf Training v0.7 Research submission show an average improvement of 2.7 times over comparable TPU v3 results from Google’s MLPerf Training v0.6 Available submission at the identical scale of 64 chips. Improvements are due to hardware innovations in TPU v4 as well as software improvements.3

Rapid, ongoing progress

Google’s MLPerf Training v0.7 submissions demonstrate our commitment to advancing machine learning research and engineering at scale and delivering those advances to users through open-source software, Google’s products, and Google Cloud.

You can use Google’s second-generation and third-generation TPU supercomputers in Google Cloud today. Please visit the Cloud TPU homepage and documentation to learn more. Cloud TPUs support TensorFlow and PyTorch, and a JAX Cloud TPU Preview is also available.


1. All results retrieved from www.mlperf.org on July 29, 2020. MLPerf name and logo are trademarks. See www.mlperf.org for more information. Chart compares results: 0.7-70 v. 0.7-17, 0.7-66 v. 0.7-31, 0.7-68 v. 0.7-39, 0.7-68 v. 0.7-34, 0.7-66 v. 0.7-38, 0.7-67 v. 0.7-29.
2. All results retrieved from www.mlperf.org on July 29, 2020. MLPerf name and logo are trademarks. See www.mlperf.org for more information. Table shows results: 0.7-68, 0.7-66, 0.7-68, 0.7-66, 0.7-68, 0.7-65, 0.7-68, 0.7-66.
3. All results retrieved from www.mlperf.org on July 29, 2020. MLPerf name and logo are trademarks. See www.mlperf.org for more information. Figure compares results 0.7-70 v. 0.6-2.

Original at: https://cloud.google.com/blog/products/ai-machine-learning/google-breaks-ai-performance-records-in-mlperf-with-worlds-fastest-training-supercomputer

Building a Self-Service, Secure, & Continually Compliant Environment on AWS

by Japjot Walia and Jonathan Shapiro-Ward

Introduction

If you’re an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers’ expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement.

With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities are required. With security and compliance at the top of financial service customers’ agendas, being able to rapidly innovate and stay secure is essential. To achieve exactly that, AWS Professional Services engaged with a major global systemically important bank (G-SIB) customer to help develop ML capabilities and implement a Defense in Depth (DiD) security strategy. This blog post provides an overview of this solution.

The machine learning solution

The following architecture diagram shows the ML solution we developed for a customer. This architecture is designed to achieve innovation, operational performance, and security performance in line with customer-defined control objectives, as well as meet the regulatory and compliance requirements of supervisory authorities.

Figure: The machine learning solution developed for the customer

This solution is built and automated using AWS CloudFormation templates with pre-configured security guardrails and abstracted through the service catalog. AWS Service Catalog allows you to quickly let your users deploy approved IT services ensuring governance, compliance, and security best practices are enforced during the provisioning of resources.

Further, it leverages Amazon SageMaker, Amazon Simple Storage Service (S3), and Amazon Relational Database Service (RDS) to facilitate the development of advanced ML models. As security is paramount for this workload, data in S3 is encrypted using client-side encryption, and column-level encryption is applied to columns in RDS. Our customer also codified their security controls via AWS Config rules to achieve continual compliance.

Compute and network isolation

To enable our customer to rapidly explore new ML models while achieving the highest standards of security, separate VPCs were used to isolate infrastructure, with access controlled by security groups. Core to this solution is Amazon SageMaker, a fully managed service that provides the ability to rapidly build, train, and deploy ML models. Amazon SageMaker notebooks are managed Jupyter notebooks used to:

  1. Prepare and process data
  2. Write code to train models
  3. Deploy models to SageMaker hosting
  4. Test or validate models

In our solution, notebooks run in an isolated VPC with no egress connectivity other than VPC endpoints, which enable private communication with AWS services. When used in conjunction with VPC endpoint policies, this lets you control which resources the notebooks can access. In our solution, this is used to allow the SageMaker notebook to communicate only with resources owned by AWS Organizations through the use of the aws:PrincipalOrgID condition key. AWS Organizations helps provide governance to meet strict compliance regulation, and you can use the aws:PrincipalOrgID condition key in your resource-based policies to easily restrict access to Identity and Access Management (IAM) principals from accounts in your organization.

Data protection

Amazon S3 is used to store training data, model artifacts, and other data sets. Our solution uses server-side encryption with customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS) to protect data at rest. SSE-KMS leverages KMS and uses an envelope encryption strategy with CMKs. Envelope encryption is the practice of encrypting data with a data key and then encrypting that data key with another key – the CMK. CMKs are created in KMS and never leave KMS unencrypted. This approach allows fine-grained control over access to the CMK and logs all access and attempted access to the key in Amazon CloudTrail. In our solution, the age of the CMK is tracked by AWS Config and the key is regularly rotated. AWS Config enables you to assess, audit, and evaluate the configurations of deployed AWS resources by continuously monitoring and recording AWS resource configurations. This allows you to automate the evaluation of recorded configurations against desired configurations.
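
As a hedged illustration of this kind of configuration (the bucket name and key alias are placeholders, not taken from the customer solution), default SSE-KMS encryption can be enforced on a bucket with the AWS CLI:

# aws s3api put-bucket-encryption --bucket example-training-data \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"alias/example-cmk"}}]}'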

Amazon S3 Block Public Access is also used at the account level to ensure that bucket policies or access-control lists (ACLs) on existing and newly created resources don’t allow public access. Service control policies (SCPs) are used to prevent users from modifying this setting. AWS Config continually monitors S3 and remediates any attempt to make a bucket public.
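
A hedged example of applying that account-level setting with the AWS CLI (the account ID is a placeholder):

# aws s3control put-public-access-block --account-id 111122223333 \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true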

Data in the solution is classified according to its sensitivity, corresponding to the customer’s data classification hierarchy. Classification in the solution is achieved through resource tagging, and tags are used in conjunction with AWS Config to ensure adherence to encryption, data retention, and archival requirements.

Continuous compliance

Our solution adopts a continuous compliance approach, whereby the compliance status of the architecture is continuously evaluated and auto-remediated if a configuration change attempts to violate the compliance posture. To achieve this, AWS Config and config rules are used to confirm that resources are configured in compliance with defined policies. AWS Lambda is used to implement a custom rule set that extends the rules included in AWS Config.

Data exfiltration prevention

In our solution, VPC Flow Logs are enabled on all accounts to record information about the IP traffic going to and from network interfaces in each VPC. This allows us to watch for abnormal and unexpected outbound connection requests, which could be an indication of attempts to exfiltrate data. Amazon GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs to identify unexpected and potentially malicious activity within the AWS environment. For example, GuardDuty can detect compromised Amazon Elastic Compute Cloud (EC2) instances communicating with known command-and-control servers.

Conclusion

Financial services customers are using AWS to develop machine learning and analytics solutions that solve key business challenges while meeting security and compliance needs. This post outlined how Amazon SageMaker, along with multiple security services (AWS Config, GuardDuty, KMS), enables building a self-service, secure, and continually compliant data science environment on AWS for a financial services use case.

Original at: https://aws.amazon.com/blogs/architecture/building-a-self-service-secure-continually-compliant-environment-on-aws/

Linux: Changing Disks and Partitions Without Rebooting

If the size of a disk has changed, for example after growing an existing disk on a VM, use the command below to force the Linux kernel to re-read the disk information.

# echo 1 > /sys/class/block/sdd/device/rescan

In the command above, “sdd” is the device of the disk whose size was changed.
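
To confirm the new size was picked up, you can check the device again (same “sdd” device as above):

# lsblk /dev/sdd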

If one or more disks have been added to or removed from the server, use the command below to force a rescan of the controller buses for device changes. Note that this command has no effect when only the size of a disk already recognized by the system has changed.

# for host in /sys/class/scsi_host/*; do echo "- - -" | tee $host/scan; lsblk -S ; done

Once the commands above have been run and the disks are properly recognized with the correct sizes, you can proceed with creating and/or resizing the filesystem.

How Oracle Cloud Customers Can Turn GDPR and CCPA Into a Business Advantage


Saswata Basu

CEO AND FOUNDER OF 0CHAIN

Enterprises face data protection and privacy liability issues based on the new General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) regulations, which carry fines of up to $25M. Both regulations empower consumers with key data protection rights, including knowing how their data is being used, a right to access, and a right to opt out of having their data sold to third parties. In light of these new protections, a staggering €500M in fines has been levied since GDPR enforcement began in mid-2018. This problem is growing, and enterprises need a simple, automated solution.

Current Solutions

Today, several software management solutions have designed workflows to manage compliance, covering areas such as awareness, data mapping, consumer request fulfillment, cookie management, vendor risk, and incident reports. These tools help manage the requirements and avert fines. However, these processes typically require several people to manage the effort and are difficult to scale when customer requests and deletion rates grow exponentially. Fundamentally, the customer does not achieve real privacy and transparency, and the company is still liable for privacy breaches, regardless of compliance requirements.

A New Approach

For organizations to truly solve the privacy issue and achieve zero liability, they must give full ownership and control of data to the customer and provide transparency of data activities. The following architecture allows you to own a storage allocation on a trusted platform, upload your data, and then share an encrypted link with the company. The business then uses this auth token to download a copy for its application, without any change to its current IT processes. In this three-step process you have the following capabilities:

  • Owning your data, with the ability to upload, update, and delete encrypted data.
  • Sharing the authentication token with the company.
  • Allowing the company to download their data.

Figure: The three-step architecture for user storage allocation and sharing of encrypted data.

This simple three-step process is recorded and displayed transparently to customers, without the need for complex processes that can’t scale with customer requests for access and deletion of their data. The responsibility lies with the customer, and not the company.

Shifting Liability

With user ownership, control of data, and a transparent process of securely sharing it to the company, the onus of privacy protection lies with the customer. For the hacker, it’s difficult to attack a lot of customers, because they would need to steal keys individually.

With the liability shifted to you, organizations must ensure that the data is well protected, that all activities are recorded on the ledger, and prevent any potential for breach.

Preventing Breach

Copies are inherently vulnerable because a hacker needs to attack only the most vulnerable server to get access to all customer data. One approach to prevent this attack mode is to split the data into multiple servers with different keys. Now, the attacker needs all the keys to get access. The implementation of this architecture is fairly simple as the server access keys can be distributed among teams and individuals within the organization.

Figure: The separation of server access keys.

The Business Advantage

0Chain is deployable directly from Oracle Cloud Marketplace. It provides automation, liability and breach protection, immutability, transparency, and a trusted platform for the Oracle Cloud Infrastructure customers to not only abide by GDPR and CCPA regulations, but also use them to their business advantage. With 0Chain, organizations can brand themselves as a leader in privacy, gain higher usage, add revenue, and differentiate their products in the market.

You can deploy 0Chain on-premises, through the cloud, or in hybrid environments. The user interface automatically creates a key and an allocation, and stores the key based on your password. So, only you can access your key and data. This key is registered on the platform’s blockchain, and any action of uploading a new file (such as a post, image, or video), updating an existing file (such as user profile data), or deleting one is recorded on the ledger.

The files are automatically shared to the business through an auth token. So, only the business can decrypt and use it for their operations. Each time the company uses customer data, they need to make a note of the metadata on the blockchain, creating a transparent audit trail for you to instantly visualize it and have complete trust in the organization. The company can handle this process as a parallel batch operation offline, without the need to block existing inline data used by the company for their daily business activities.

Migrating customer data to this trusted platform requires a simple, phased approach. In the first phase, only new data is sent to the platform. In the second phase, older data is migrated. In subsequent phases, granular datasets can be implemented to have a higher level of precision permission settings that the company can offer the customer, perhaps with a fee to cover for development costs.

FAQ

  • How do you achieve zero liability?
    Since you own and control the data, the company is not liable. You provide explicit permission through signed transactions, which can’t be disputed since your action is recorded on the blockchain and is immutable—something that a company can’t go back and change in its database records.
  • How do you make breach impossible?
    Since the file is split into multiple servers, a hacker needs to have keys to all of them to gain access. This configurable feature, with each server split, makes it more difficult to get hold of the keys, as long as they’re distributed across individuals and teams.
  • Does 0Chain replace the current privacy software tools?
    No. 0Chain helps you protect your data better and shift liability back to you. It can handle billions of customer privacy requests and provides instant compliance reports from ledger transactions, using an integrated search tool for specific files, users, and activities that anyone can access. Through these efforts, 0Chain provides an open, trusted compliance platform.
  • How do you ensure that the enterprise aligns with your consented dataset?
    Whenever a dataset is used by the enterprise, they send a signed transaction with the metadata of the content to record the activity. If the company misuses the dataset, you can mount a challenge based on the exposed data and the consent that’s recorded in the ledger.

Want to Know More?

To learn more, visit our website and trial our product for free through Oracle Cloud Marketplace.

Original at: https://blogs.oracle.com/cloud-infrastructure/how-oracle-cloud-customers-can-turn-gdpr-and-ccpa-into-a-business-advantage

How Oracle is Helping You Maintain a Strong Security Posture in the Cloud


Paul Toal

DISTINGUISHED SOLUTION ENGINEER – CYBER SECURITY

This is a syndicated post; view the original post here

So, you’ve just signed up to a shiny new cloud provider. It’s exciting when you realise that you not only have an almost unlimited supply of Infrastructure-as-a-Service (IaaS) at your fingertips, but you also have a plethora of various platform services just waiting for you to use. However, before you get carried away spinning up compute and uploading your files into storage, you need to realise that you have a shared responsibility for security, as shown in Figure 1.

Figure 1 – Security is not just the job of the Cloud Service Provider

Sure, the Cloud Service Provider (CSP) has a set of security responsibilities, but so do you. At a minimum, irrespective of whether you are using Software, Platform, or Infrastructure as-a-service, you will always be responsible for your data, your users, and to some extent, your configuration. As you move away from SaaS towards IaaS, your responsibilities grow as you become responsible for software, operating systems, patching etc.

It seems that, whilst we have talked for a long time as an industry about the cloud security shared responsibility model, there is still plenty of confusion out there. The two statistics in Figure 1 come from the Oracle and KPMG Cloud Threat Report 2019, and reviewing the figures from the recently released report for 2020, the situation isn’t any better. Only 8% of this year’s respondents stated that they fully understand the cloud security shared responsibility model. I’ve discussed this topic before when looking at how “Security must be easier and not just for the experts”.

In this article, I want to look at Cloud Security Posture Management (CSPM) and some of the use cases that come to mind, as well as those that I am hearing from customers. I’ll discuss a number of use cases, why they are important, and how Oracle Cloud Infrastructure (OCI) is helping you to meet and address your shared responsibilities. So, if you are ready, let’s get started.

Before we can look at use cases, we need to understand what we mean by Cloud Security Posture Management. Simply put, it is about ensuring that your cloud environment is configured in a secure manner, that it remains secure over time, and that configuration changes or activities don’t weaken that posture. Gaining that secure position, never mind maintaining it, can be difficult, due to a number of factors, including:

  1. Larger exposure due to incredible rate of growth of cloud resources
  2. More Cloud services mean more complexity and more settings to manage
  3. Fewer experts caused by a large IT security skills shortage

Let’s examine some use cases that we need to address with CSPM and then we’ll discuss how Oracle can help you to meet your security responsibilities in this area.

Use Case 1 – Data exposure through public buckets

No doubt you will have seen plenty of data breach stories in the media where sensitive data was found on object storage buckets that had public visibility. There may be valid use cases where a bucket should be public, however this should be on a very tightly controlled exception basis.

In OCI, it’s difficult to create a public object storage bucket by mistake. First you create the bucket, then you change its visibility to make it public.

This may be an intentional change in visibility of the bucket, but this also could have been a temporary change, such as for debugging or testing.

While the bucket is public and if it contains sensitive data, you are at risk of a data breach. The attacker just needs to find the bucket on the internet, and believe me, there are plenty of people looking for them.
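
If such a bucket is found, one way to close it quickly is via the OCI CLI, as in the hedged sketch below (the bucket name is a placeholder and the flag spellings should be verified against the current OCI CLI reference):

# oci os bucket update --name my-public-bucket --public-access-type NoPublicAccess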

Use Case 2 – Ensuring only approved OS types and versions are used for compute

Another common use case is looking at the images that should be used when creating new compute instances. OCI provides a wide variety of images that can be used, including:

  • Platform images, e.g. Windows Server, Ubuntu, Linux etc.
  • Oracle Images, e.g. E-Business Suite, Enterprise Manager etc.
  • Partner Images, e.g. Next generation firewalls, GPU machines etc.

You can also bring your own images as well as using existing boot volumes. Now, imagine you have a standard, approved OS type that your compute instances must be built with, or you have a custom image that you want to ensure is used as a gold build for all instances. This image may have your corporate standard IT tools on it such as anti-virus, and corporately approved packages. It may also have a number of services removed or hardening policies applied.

As part of enforcing your security policy, you need to ensure that all compute instances are using the approved OS types and versions, or are using your gold build images.

This means you need to identify any compute instances that don’t use the approved images. Furthermore, you may also want to automatically shut down any instances violating that policy, or even terminate them. In some cases, you may also want to disable the account of the administrator who is creating these non-approved compute instances.

Use Case 3 – Adding internet-based routes to your Virtual Cloud Network

The next use case addresses network access to your cloud environment. One common design pattern for cloud deployments is as an expansion of your existing data centre. In these cases, it is common for a VPN or private connection (called FastConnect in OCI) to be deployed between your data centre and your cloud environment. All access to those cloud services is directed down this connection and there is no direct access over the internet.

Now, let’s take the scenario where a network administrator makes a change to your virtual cloud network (VCN). They add an internet gateway and change the routing rules for this new gateway. In our use case here, neither of those actions should be performed on this particular VCN. It might be that the administrator has changed the wrong VCN by mistake, or it could be something more nefarious. Either way, the change needs to be identified quickly and fixed to ensure that any security risk is minimised.

Use Case 4 – Key Rotation

For our final use case, let’s think about key management. Cryptographic keys are used in lots of places, whether as the basis for in-transit encryption, or for encryption at rest. Many organisations have IT security policies governing the lifecycle and use of keys, including how often keys must be changed.

Within OCI, Oracle will manage keys for you where you have no policy stipulating that you must manage your own. Any time you create a storage device (e.g. object storage bucket, boot volume, block volume, file storage), then it will be encrypted with an Oracle-managed key. However, we also provide you the ability to manage your own keys, through OCI Vault, a service backed by highly-available FIPS 140-2 Level 3 Hardware Security Modules (HSMs).

If you do choose to manage your own keys, you will likely need to rotate them periodically to ensure the amount of data encrypted by any one key is not too great.

Therefore, being able to identify keys that you manage that haven’t been rotated in-line with your security policy is important. Even better would be automatically rotating those keys to help ensure you’re meeting all of your regulatory compliance needs and industry best practices.

Now that we’ve talked about a number of use cases and understand the problem, how is Oracle helping in this area?

Back at Oracle OpenWorld 2019, our vision for OCI security was announced, focused on making security easier, more automated, and always-on. To deliver that vision, a number of capabilities were announced, including Oracle Cloud Guard, which is built around a set of key design principles.

One of the most interesting design principles is our use of Embedded Expertise. What this means is that Oracle knows OCI best. We know what security controls are available and how best to apply them at scale. We also know what problems to look for and how to apply security features to mitigate those problems. By applying all of our own embedded expertise we are taking the burden away from you and removing the need for you to build all of these policies yourself.

Let’s take one of our use cases above as an example and look at how Cloud Guard would address a security risk such as a public bucket.

Within Cloud Guard, Oracle uses our own embedded expertise to create out-of-the-box rules to identify common problems, including, in this case, the detection of any buckets that are public. Of course, you can tune the rules to add various conditions. For example, in this particular rule, we can tune it to exclude any buckets that are authorised to be public.

In my scenario, I have created two buckets and made them both public. As my very inventive names suggest, one of my buckets is allowed to be public and therefore shouldn’t trigger any alerts.

Within Cloud Guard I am using the out-of-the-box rules but have tuned the detector rule for public bucket detection to exclude the bucket named authorised-public-bucket.

Now, Cloud Guard will identify any issues it finds within OCI as Problems, but we can see that in this scenario it only identifies the unauthorised-public-bucket as a Problem.

Note also how Cloud Guard recognised that we are using the default Oracle-managed keys for the encryption of these buckets rather than our own customer-managed keys through OCI Vault.

If configured to do so, Cloud Guard can automatically remediate the problem, in this case by changing the bucket back to private, which it has done here.

It can also send notifications. In my case, I am sending all Critical notifications to Slack, and all non-critical notifications to email. However, I could just as easily send them to PagerDuty, a custom HTTPS URL, or even call a serverless function using OCI Functions (by writing a function using the open-source Fn project).

As you can see, Cloud Guard has not only identified a problem with my object storage buckets, but it has informed me about it as well as provided, automated closed-loop remediation to remove the security risk.

Cloud Guard is currently in Limited Availability, due to be released later this year. Therefore, don’t forget to refer to Oracle’s Safe Harbor statement below:

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Original at: https://blogs.oracle.com/cloud-infrastructure/how-oracle-is-helping-you-maintain-a-strong-security-posture-in-the-cloud-v2

Resizing Disk Partitions on Linux

I recommend using the “parted” utility, because with “fdisk” you have to delete and recreate the partition, which carries a much higher risk of something going wrong.

In the example below we will resize partition 2 of the /dev/sda disk to use all the free space available on the disk. That space appeared when the disk was grown in VMware (or any other hypervisor) or when it was cloned to a larger disk.

# parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 1049kB 1075MB 1074MB primary ext4 boot
2 1075MB 172GB 171GB primary lvm

(parted) resizepart 2 100%
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
1 1049kB 1075MB 1074MB primary ext4 boot
2 1075MB 268GB 267GB primary lvm

(parted) quit
Information: You may need to update /etc/fstab.

After that, since the partition is an LVM physical volume, just grow the PV:

# pvresize /dev/sda2
Physical volume "/dev/sda2" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

From there, the procedure to extend the logical volume (LV) is the same as when new disks are added:

# lvextend -r -l +100%FREE /dev/cl_aplprd01dc/root
Size of logical volume cl_aplprd01dc/root changed from <151.00 GiB (38655 extents) to <241.00 GiB (61695 extents).
Logical volume cl_aplprd01dc/root successfully resized.
meta-data=/dev/mapper/cl_aplprd01dc-root isize=512 agcount=4, agsize=9895680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=39582720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=19327, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 39582720 to 63175680
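
To confirm that the filesystem now reflects the new size (assuming the root LV shown above is mounted at /), check the mount point afterwards:

# df -h /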