Erase and Rewind – Redeploying Oracle Database Appliance (2.3.0 image)

After using an Oracle Database Appliance for a bit of testing, it was time to get it ready for production. I decided to start with a fresh deployment, re-initializing the shared storage.

Note that this is the 2.3.0 base image, and that the procedure may differ somewhat for other versions.

The included cleanupDeploy.pl script will pretty much just shut everything down, erase everything on the shared storage and reboot both nodes. It leaves the iLOM network configuration and most files on the system drives intact, including any deployment configuration files you may have stored in /root/.

It will remove the host network configuration and reset the root password to the default, welcome1.

[root@oma1 ~]# cd /opt/oracle/oak/onecmd
[root@oma1 onecmd]# ./cleanupDeploy.pl

Please enter the root password for performing cleanup:
Re-enter root password:

About to clear up OAK deployment,public network connectivity will be lost,root password will be set to default and both nodes will be rebooted

Do you want to continue(yes/no):

The script starts by asking twice for the root password before requesting a final confirmation. Make no mistake, after this point all services will be shut down and all data will be erased!

After a few minutes, the script completes with

Exiting…
[root@oma1 onecmd]#
Connection to oma1 closed by remote host.
Connection to oma1 closed.

and we are ready to start a new installation.

The first thing we need to do is connect to the iLOM, either via serial or Ethernet, or connect a keyboard and monitor, so we can set up the initial network configuration. The cleanup script does not touch the current configuration of the iLOM Ethernet interface, so the web and ssh consoles should still be available.

-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y

Serial console started. To stop, type ESC (

Oracle Linux Server release 5.8 Kernel 2.6.32-300.32.3.el5uek on an x86_64

oak1 login: root
Password:
Last login: Mon Oct 22 15:53:20 from 192.168.60.46

[root@oak1 ~]# cd /opt/oracle/oak/bin/
[root@oak1 bin]# ./oakcli configure firstnet

To work around a bug in cleanupDeploy.pl, we need to add a localhost entry to /etc/hosts on both nodes.

[root@oak1 ~]# echo 127.0.0.1 localhost.localdomain localhost >> /etc/hosts

I’m connected to the iLOM serial console through ssh, so I have to reconnect to the newly configured interface to enable X support for the deployment GUI. I have a configuration file stored in /root, so I start the deployment with that file as the parameter file.

Connection to oma1-ilom closed.
roy@kayna:~$ ssh oma1 -l root -X
root@oma1's password:
Last login: Tue Oct 23 13:01:39 2012 from 192.168.60.46
[root@oak1 ~]# cd /opt/oracle/oak/bin/
[root@oak1 bin]# ./oakcli deploy -conf /root/oak_deploy.params
Log messages in /tmp/oak_1350990145126.log
Loaded configuration file /root/oak_deploy.params successfully
Running Oracle Appliance Manager

Depending on your network configuration, the deployment will sometimes make a mess of the default route when it starts. If you see this line in the output of route

default         *               0.0.0.0         U     0      0        0 bond0

you will need to immediately delete the entry and set the correct default gateway (on both nodes), lest the deployment fail the ValidateEnv step because it cannot reach the DNS servers to check for the SCAN listener DNS entry.

[root@oak2 ~]# route delete default
[root@oak2 ~]# route add default gw 192.168.0.1
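
Before starting another attempt, it may be worth verifying that the default route and name resolution work again on both nodes. A quick check could look like this (the SCAN name below is hypothetical; use your own):

[root@oak2 ~]# route -n | grep '^0.0.0.0'
[root@oak2 ~]# nslookup oda-scan.example.com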

Hopefully the deployment will now complete successfully. If not, you can always jump back to the top, clean up the deployment and start fresh for another attempt.


Designing an isolated front production network for ODA

With the introduction of Oracle Engineered Systems, the DBA is moved somewhat closer to infrastructure design decisions, and is expected to have at least some opinion on the network design requirements for the database.

As such, I’ve given some thought to how one can design a fully redundant production network for web applications with Oracle Database Appliance. I’m sure Oracle have some very clear ideas on how everything is meant to work together, but to me it was not all that clear. For a time, and even after reading all documentation and support notes related to ODA, I was convinced that the bonded interfaces meant setting up LACP or EtherChannel groups on the switches, thus requiring fully redundant distribution or core-level switches.

As it turns out, the ODA is factory configured with active-backup bonds. I’ve tested this to work well without any switch-side LACP/EtherChannel configuration, and each bonded connection may be split across two switches. In my tests, no simulated failure or network reconfiguration caused more than a few hundred milliseconds’ worth of network outage.
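
If you want to verify the bonding mode on your own appliance, the Linux bonding driver reports it in /proc (a quick check, assuming the interface is named bond0):

[root@oak1 ~]# grep 'Bonding Mode' /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)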

This means that one can set up an isolated, redundant front network for web applications using any pair of layer-two switches, even ones that are not inherently redundant.

Illustration of an isolated front production network

To avoid client connections taking the long way into the company network and back through the other switch (and thus making production dependent on that equipment), one can have a private VLAN that only lives on the two edge switches and on an EtherChannel trunk between them.

As such, only the application servers and the database appliance will exist on that virtual network segment.

I don’t see a way to control which path the connections from the application servers take to the database listeners, so the link between the two switches will have to be redundant, lest this link become a single point of failure. This rules out unmanaged switches, and any switch without support for VLANs and either LACP or STP.

Using Cisco Catalyst 2960-series switches, I believe a combination of EtherChannel and PortFast would be the better choice for a solid, independent connection between the two. I would also use PortFast on the ports for all the bonded connections to the ODA and the application servers.
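
A minimal sketch of what that could look like in IOS, assuming VLAN 100 for the production segment, ports Gi0/1-2 for the inter-switch EtherChannel and Gi0/3 as an example server-facing port (names and numbers are placeholders; adjust to your environment):

! LACP EtherChannel for the inter-switch trunk carrying the private VLAN
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 100
! access port towards an ODA or application server bond member
interface GigabitEthernet0/3
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast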

Since the production network is isolated, one would need separate network connections for management, backup and connectivity to the rest of the company network.

Naturally, in order for this front production network to be fully self-contained, any dependencies on external resources, such as DNS or authentication services, must also be resolved. Ideally, production would be able to continue independently of any faults, ongoing maintenance or network outages anywhere else in the data center or company network.



Import data to a different tablespace using Data Pump

In a previous article I looked at how one could force imp to place user objects in the default tablespace of a user, in cases where the originating tablespace was not available in the destination database.

The newer Oracle Data Pump utilities, expdp and impdp, provide a remapping option that lets you migrate data to a new tablespace during import. Data Pump became the preferred way of doing imports and exports as of Oracle 11g.

Data Pump reads and writes data in a directory on the database server, so the first thing we must do is define such a directory on both the source and destination databases. The directory must exist on the file system and be accessible to the Oracle database.

SQL> CREATE DIRECTORY dpxfer1 AS '/u01/dpxfer1';

We must also give the schema user access to the Data Pump directory:

GRANT READ,WRITE ON DIRECTORY dpxfer1 TO dbuser;

Then we’re set for the actual export. A number of options from the exp utility are available and will automatically be replaced by their expdp equivalents:

expdp dbuser/password DIRECTORY=dpxfer1 DUMPFILE=dbuser.dmp LOGFILE=dbuser.log CONSISTENT=Y

Legacy Mode Parameter: "consistent=TRUE" Location: Command Line, Replaced with: "flashback_time=TO_TIMESTAMP('2012-10-15 14:43:06', 'YYYY-MM-DD HH24:MI:SS')"

As soon as we have transferred the dump file /u01/dpxfer1/dbuser.dmp to the destination server, we’re ready to run the import with the REMAP_TABLESPACE option, in this case placing any objects from the USERS tablespace on the source database into the DBUSER tablespace on the destination database. Naturally, the user and tablespace must already exist on the destination, and the Data Pump directory, as well as the associated grant, must be created just as on the source database.

impdp dbuser/password DIRECTORY=dpxfer1 dumpfile=dbuser.dmp logfile=dbuser_imp.log REMAP_TABLESPACE=users:dbuser
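
Once the import completes, a quick query against the data dictionary, connected as the schema owner, should confirm that the objects ended up in the new tablespace:

SQL> SELECT table_name, tablespace_name FROM user_tables;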



Import data to a different tablespace using imp

In the old days, it was common to store objects from assorted users in the USERS tablespace. These days, it often seems more appropriate to store user objects in a tablespace dedicated to that schema.

So, as I was transferring data to a new Oracle server, I found myself wishing to import data into a different tablespace than on the source database.

While the impdp tool includes a remap option to take care of such a scenario, the traditional imp tool does not. A workaround is to force all user objects into the default tablespace of that user; however, this only works if the source tablespace does not exist in the destination database.

As an example, I have a user blog_data on OLDDB with its objects (in this case, a single table with a single row, and the associated primary key index) stored in the USERS tablespace.

I’ll export the schema using something like:

exp blog_data/secret@OLDDB file=blog_data.dmp

Then, on the destination database, I need to create a user with the appropriate privileges and default tablespace, and make sure the USERS tablespace does not exist. Assuming this is a new installation, I can simply set a new default permanent tablespace for the database and delete the empty USERS tablespace:

CREATE TABLESPACE users2 DATAFILE SIZE 100M;

ALTER DATABASE DEFAULT TABLESPACE users2;

DROP TABLESPACE USERS;

CREATE TABLESPACE blog_data DATAFILE SIZE 1M;

CREATE USER blog_data IDENTIFIED BY secretpassword DEFAULT TABLESPACE blog_data QUOTA UNLIMITED ON blog_data;

GRANT connect,resource TO blog_data;
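
Since the workaround depends on the USERS tablespace being absent, it may be worth double-checking from a privileged session before running the import:

SQL> SELECT tablespace_name FROM dba_tablespaces WHERE tablespace_name = 'USERS';

no rows selected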

Now I can simply import the schema into my new user, forcing imp to place all data in the default tablespace:

[oracle@oratest1 impexptest]$ imp blog_data/secretpassword@NEWDB file=blog_data.dmp

Import: Release 11.2.0.3.0 - Production on Fri Oct 12 23:15:44 2012

Export file created by EXPORT:V11.02.00 via conventional path
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. importing BLOG_DATA's objects into BLOG_DATA
. . importing table "PEOPLE" 1 rows imported
Import terminated successfully without warnings.

The import completes without warnings, and we can then see that the table is in the right place:

[oracle@oratest1 impexptest]$ sqlplus "blog_data/secretpassword"

SQL> select table_name,tablespace_name from user_tables;

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
PEOPLE                         BLOG_DATA


Documentation for odachk is now available on My Oracle Support

The documentation for Oracle Database Appliance Configuration Audit Tool is now available on My Oracle Support note 1485630.1.

The odachk script, quite similar to the exachk tool on Exadata, verifies a large number of components in the ODA environment, including network and cluster configuration, a number of best practices, and firmware and patch versions. It was made available on the ODA in patch bundle 2.2.0.

When I failed to find any documentation on the ODA version of the tool, I contacted Oracle, who then created the above-mentioned note with the odachk User Guide attached.


Benchmark tool for Windows network shares

After some tinkering on my filer, I once again found myself looking for a simple tool to check the performance of a network share under Windows (SMB/CIFS). The usual disk benchmark tools, such as CrystalDiskMark and AS SSD Benchmark, typically only work on local block devices.

This time I was in luck, and found a blog entry pointing out that an older version of ATTO seemed to be just the thing for a simple analysis of network share performance: http://sburke.eu/blog/2011/04/quick-disk-benchmark-for-windows/

Of course, now I need to work out why the filer’s external storage performs rather poorly over the network, and possibly how to improve on the situation.


No more iBooks for me

I’m not sure why I’m surprised to find that the DRM-protected books I bought from the Apple iTunes Store can’t easily be read in any other way than with the iBooks reader app, which is available only on iOS devices. Somehow, though, I am.

I had a look at Requiem, but had no luck removing the DRM using this tool. Besides, it’s a little unclear whether this is presently allowed under Norwegian law.

Unfortunately, most popular new ebooks seem to be saddled with digital restrictions of some sort, although at least the books from the Amazon Kindle Store are readily available on multiple platforms, including Windows, Mac OS X, iOS, Android and Kindle. Additionally, there is a web application, the Kindle Cloud Reader, which makes the books available on other platforms like Linux. The web app also appears to support offline reading.

The Norwegian ebooks I bought from stores connected to Bokskya.no, and possibly other ebooks readable in Adobe Digital Editions, seem to decrypt nicely with Calibre and some plug-ins.

At the end of the day, I’m not buying any more DRM books from the Apple iTunes Store.

Oh, and I do love my new Kindle 🙂


The SMS generation has reached Oracle HQ


Oracle Database Appliance asked me to “pls wait…” during installation 🙂


DLNA-compatible desktop streams from VLC

After a good bit of tinkering, I’ve come up with the following setup for generating a DLNA-compatible stream of a Windows or Linux desktop using VLC.

Note that there is no audio in this configuration, as it was not a requirement for my project. Also, the screen resolution of the source computer should probably match that of the stream: 1920 x 1080.

These streams work quite well with our Samsung 5-series Smart TVs, and should work on most other DLNA or UPnP AV devices capable of playing H.264 video in 1080p, aka Full HD.

For reasons I have yet to figure out, the streams are detected as 4:3 format. I was able to easily work around this by setting the TV to force 16:9.

Typically I experience a 5-10 second delay between the source computer and the rendering device.

On Linux the built-in screen:// video source seems to work nicely, while on Windows I found an open source DirectShow filter by Roger Pack, Screen Capture Recorder, to work somewhat better.


Linux:

vlc screen:// --screen-fps=25 --sout '#transcode{venc=x264{profile=low,preset=ultrafast,tune=animation,keyint=25},vcodec=h264,width=1920,height=1080,fps=25,acodec=none}:http{mux=ps,dst=:8080/}' --sout-keep


Windows (using screen-capture-recorder):

"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" -vvv dshow:// --sout=#transcode{venc=x264{profile=low,preset=ultrafast,tune=animation,keyint=25},vcodec=h264,width=1920,height=1080,fps=25,acodec=none}:std{access=http,mux=ps,dst=0.0.0.0:8080} --no-sout-rtp-sap --no-sout-standard-sap --ttl=1 --sout-keep :dshow-vdev=screen-capture-recorder :dshow-adev=none :dshow-caching=1000


You should now be able to reach the stream by pointing another instance of VLC to port 8080 of the source machine. You could even test with a second instance of VLC on the same host by opening http://localhost:8080/
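
For a quick test from another machine, simply point a second instance of VLC at the source host (assuming here that it is reachable as mediapc):

vlc http://mediapc:8080/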

The streams should be added to a DLNA or UPnP AV media server capable of serving live streams; I have had some luck using Serviio. It’s not open source, but a free version is available for Linux, Windows and OS X.


Make WordPress 3.4.2 show double hyphens

By default WordPress converts a double hyphen (--) into a single dash (–). Not very practical if you need to show a command that uses double-hyphen options, such as "--all-databases" in the previous post about mysqldump over ssh.

I found that this behavior could be altered by editing wp-includes/formatting.php and removing the third element of each of the arrays on lines 73 and 74 ($static_characters and $static_replacements).

