How to Get Started With Autonomous Data Warehouse

Our previous post, Data Warehouse 101: Introduction, outlined the benefits of the Autonomous Data Warehouse: it's simple, fast, elastic, secure, and, best of all, incredibly easy to spin up an environment and start a new project. If you read through the last post, you already know how to sign up for a data warehouse trial account and download SQL Developer and Data Visualization Desktop, both of which come free with the Autonomous Data Warehouse.

Sign up for a Free Data Warehouse Trial Today

This post will focus on the steps to get started using the Oracle Autonomous Data Warehouse. We will provision a new Autonomous Data Warehouse instance and connect to the database using Oracle SQL Developer.

How to Use Autonomous Data Warehouse with Oracle Cloud Infrastructure

STEP 1: Sign in to Oracle Cloud

Go to cloud.oracle.com. Click Sign In to sign in with your Oracle Cloud account.
Enter your Cloud Account Name and click My Services.

Enter your Oracle Cloud username and password, and click Sign In.

STEP 2: Create an Autonomous Data Warehouse Instance

Once you are logged in, you are taken to the cloud services dashboard where you can see all the services available to you. Click Create Instance.

Note: You may also access your Autonomous Data Warehouse service via the pull-out menu on the top left of the page, or by using Customize Dashboard to add the service to your dashboard.

Click Create on the Autonomous Data Warehouse tile. If it does not appear in your Featured Services, click on All Services and find it there.

Select the root compartment, or another compartment of your choice, where you will create your new Autonomous Data Warehouse instance. If you want to create a new compartment or learn more about compartments, see the Oracle Cloud Infrastructure documentation.
Note: Avoid the ManagedCompartmentforPaaS compartment, as this is an Oracle default used for Oracle Platform Services.

Click the Create Autonomous Data Warehouse button to start the instance creation process.

STEP 3: Specify the Instance Configuration

This brings up the Create Autonomous Data Warehouse screen, where you will specify the configuration of the instance. Select the root compartment, or another compartment of your choice.

Specify a memorable display name for the instance. Also specify your database's name; for this lab, use ADWFINANCE.

Next, select the number of CPUs and storage size. Here, we use 4 CPUs and 1 TB of storage.

Then specify an ADMIN password for the instance and confirm it. Make a note of this password.

For this lab, we will select Subscribe To A New Database License. If your organization already owns Oracle Database licenses, you may bring those licenses to your cloud service.
Make sure everything is filled out correctly, then click Create Autonomous Data Warehouse.

Your instance will begin provisioning. Once the state changes from Provisioning to Available, click your display name to see its details.
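The same provisioning flow can also be scripted. The following is a minimal sketch using the OCI Python SDK (the oci package), assuming the SDK is already configured via ~/.oci/config; the compartment OCID, display name, and passwords shown are placeholders rather than values from this lab.

import oci

# Load credentials from the default OCI config file (~/.oci/config).
config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Describe the instance to create; these values mirror the console choices above.
details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder compartment OCID
    db_name="ADWFINANCE",
    display_name="ADW Finance Mart",
    db_workload="DW",                                   # DW = data warehouse workload
    cpu_core_count=4,
    data_storage_size_in_tbs=1,
    license_model="LICENSE_INCLUDED",                   # "Subscribe To A New Database License"
    admin_password="ChangeMe_Password1",                # placeholder ADMIN password
)

response = db_client.create_autonomous_database(details)
adb_id = response.data.id
print("Provisioning started, OCID:", adb_id)

# Optionally block until the lifecycle state reaches AVAILABLE.
oci.wait_until(
    db_client,
    db_client.get_autonomous_database(adb_id),
    "lifecycle_state",
    "AVAILABLE",
)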

You have now created your first Autonomous Data Warehouse instance. Have a look at your instance's details, including its name, database version, CPU count, and storage size.

Because Autonomous Data Warehouse only accepts secure connections to the database, you need to download a wallet file containing your credentials first. The wallet can be downloaded either from the instance's details page, or from the Autonomous Data Warehouse service console.

STEP 4: Download the Connection Wallet

In your database's instance details page, click DB Connection.

Under Download a Connection Wallet, click Download.

Specify a password of your choice for the wallet. You will need this password when connecting to the database via SQL Developer later; it is also used as the JKS keystore password for JDBC applications that use JKS for security. Click Download to download the wallet file to your client machine.
Note: If you are prevented from downloading your connection wallet, it may be due to your browser's pop-up blocker. Please disable it or create an exception for Oracle Cloud domains.
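This step can also be scripted: the same wallet can be generated with the OCI Python SDK. The snippet below is a minimal sketch; the instance OCID and wallet password are placeholders for your own values.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

adb_id = "ocid1.autonomousdatabase.oc1..example"    # placeholder OCID of your instance
wallet_details = oci.database.models.GenerateAutonomousDatabaseWalletDetails(
    password="ChangeMe_Wallet1"                     # placeholder wallet/keystore password
)

# The response body is the wallet zip file; write it to disk.
response = db_client.generate_autonomous_database_wallet(adb_id, wallet_details)
with open("wallet_ADWFINANCE.zip", "wb") as f:
    f.write(response.data.content)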

Connecting to the database using SQL Developer

Start SQL Developer and create a connection for your database using the default administrator account 'ADMIN' by following these steps.

STEP 5: Connect to the database using SQL Developer

Click the New Connection icon in the Connections toolbox on the top left of the SQL Developer homepage.

Fill in the connection details as below:

Connection Name: admin_high
Username: admin
Password: The password you specified during provisioning your instance
Connection Type: Cloud Wallet
Configuration File: Enter the full path for the wallet file you downloaded before, or click the Browse button to point to the location of the file.
Service: There are three pre-configured database services for each database. Pick <databasename>_high for this lab. For example, if the database you created was named ADWFINANCE, select adwfinance_high as the service.

Note: SQL Developer versions prior to 18.3 ask for a Keystore Password. Here, enter the password you specified when downloading the wallet from ADW.

Test your connection by clicking the Test button. If the test succeeds, save your connection information by clicking Save, then connect to your database by clicking the Connect button. An entry for the new connection appears under Connections.
If you are behind a VPN or firewall and the test fails, make sure you have SQL Developer 18.3 or higher. These versions allow you to select the "Use HTTP Proxy Host" option for a Cloud Wallet type connection. While creating your new ADW connection, provide your proxy's host and port. If you are unsure where to find these, check your computer's connection settings or contact your network administrator.
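SQL Developer is not the only client that can use the wallet. As a quick programmatic check of the same connection, here is a minimal sketch with the python-oracledb driver in thin mode; it assumes the wallet zip has been extracted to a local directory (shown here as the placeholder /path/to/wallet) and that the database is named ADWFINANCE, so the high-priority service is adwfinance_high.

import oracledb

# Thin-mode mutual TLS connection using the extracted wallet files.
# Note: thin mode reads ewallet.pem from wallet_location; recent ADW wallets include it.
connection = oracledb.connect(
    user="ADMIN",
    password="ChangeMe_Password1",          # the ADMIN password set at provisioning (placeholder)
    dsn="adwfinance_high",                  # service name from tnsnames.ora in the wallet
    config_dir="/path/to/wallet",           # directory containing tnsnames.ora and sqlnet.ora
    wallet_location="/path/to/wallet",      # directory containing ewallet.pem
    wallet_password="ChangeMe_Wallet1",     # the password chosen when downloading the wallet (placeholder)
)

with connection.cursor() as cursor:
    cursor.execute("SELECT banner FROM v$version")
    for (banner,) in cursor:
        print(banner)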

Watch a video demonstration of provisioning a new Autonomous Data Warehouse and connecting with SQL Developer:

NOTE: The display name for the Autonomous Data Warehouse is ADW Finance Mart and the database name is ADWFINANCE. These names are for illustration only; you can choose your own.

In the next post, Data Warehouse 101: Setting up Object Store, we will start exploring a data set and show how to load and analyze it.

Written by Sai Valluri and Philip Li

Read more: blogs.oracle.com

On Saturday morning, the white stone buildings on UC Berkeley's campus radiated with unfiltered sunshine. The sky was blue, the Campanile was chiming. But instead of enjoying the beautiful day, 200 adults had willingly sardined themselves into a fluorescent-lit room in the bowels of Doe Library to rescue federal climate data.

Like similar groups across the country (in more than 20 cities), they believe that the Trump administration might want to disappear this data down a memory hole. So these hackers, scientists, and students are collecting it to save on servers outside the government.

But now they're going even further. Groups like DataRefuge and the Environmental Data and Governance Initiative, which organized the Berkeley hackathon to collect data from NASA's earth sciences programs and the Department of Energy, are doing more than archiving. Diehard coders are building robust systems to monitor ongoing changes to government websites. And they're keeping track of what's been removed, to learn exactly when the pruning began.

Tag It, Bag It

The data collection is methodical, mostly. About half the group immediately sets web crawlers on easily copied government pages, sending their text to the Internet Archive, a digital library made up of hundreds of billions of snapshots of webpages. They tag more data-intensive projects (pages with lots of links, databases, and interactive graphics) for the other group. Called baggers, these coders write custom scripts to scrape complicated data sets from the sprawling, patched-together federal websites.

It's not easy. "All these systems were written piecemeal over the course of 30 years. There's no coherent philosophy to providing data on these websites," says Daniel Roesler, chief technology officer at UtilityAPI and one of the volunteer guides for the Berkeley bagger group.

One coder who goes by Tek ran into a wall trying to download multi-satellite precipitation data from NASA's Goddard Space Flight Center. Starting in August, access to Goddard Earth Science Data required a login. But with a bit of totally legal digging around the site (DataRefuge prohibits outright hacking), Tek found a buried link to the old FTP server. He clicked and started downloading. By the end of the day he had data for all of 2016 and some of 2015. It would take at least another 24 hours to finish.

The non-coders hit dead ends too. Throughout the morning they racked up 404 "Page not found" errors across NASA's Earth Observing System website. And they more than once ran across empty databases, like the Global Change Data Center's reports archive and one of NASA's atmospheric CO2 datasets.

And this is where the real problem lies. They don't know when or why this data disappeared from the web (or if anyone backed it up first). Scientists who understand it better will have to go back and take a look. But in the meantime, DataRefuge and EDGI understand that they need to be monitoring those changes and deletions. That's more work than a human could do.

So they're building software that can do it automatically.

Future Farming

Later that afternoon, two dozen or so of the most advanced software builders gathered around whiteboards, sketching out the tools they'll need. They worked out filters to separate mundane updates from major shake-ups, and explored blockchain-like systems to build auditable ledgers of alterations. Basically it's an issue of what engineers call version control: how do you know if something has changed? How do you know if you have the latest? How do you keep track of the old stuff?

There wasn't enough time for anyone to start actually writing code, but a handful of volunteers signed on to build out tools. That's where DataRefuge and EDGI organizers really envision their movement going: a vast decentralized network spanning all 50 states and Canada. Some volunteers can code tracking software from home. And others can simply archive a little bit every day.

By the end of the day, the group had collectively loaded 8,404 NASA and DOE webpages onto the Internet Archive, effectively covering the entirety of NASA's earth science efforts. They'd also built backdoors in to download 25 gigabytes from 101 public datasets, and were expecting even more to come in as scripts on some of the larger datasets (like Tek's) finished running. But even as they celebrated over pints of beer at a pub on Euclid Street, the mood was somber.

There was still so much work to do. "Climate change data is just the tip of the iceberg," says Eric Kansa, an anthropologist who manages archaeological data archiving for the non-profit group Open Context. "There are a huge number of other datasets being threatened, with cultural, historical, sociological information." A panicked friend at the National Park Service had tipped him off to a huge data portal that contains everything from park visitation stats to GIS boundaries to inventories of species. While he sat at the bar, his computer ran scripts to pull out a list of everything in the portal. When it's done, he'll start working his way through each quirky dataset.

UPDATE 5:00pm Eastern, 2/15/17: Phrasing in this story has been updated to clarify when changes were made to federal websites. Some data is missing, but it is still unclear when that data was removed.

Source article via http://www.wired.com/