This session will provide the OpenVMS practitioner with the information needed to leverage HPE 3PAR storage systems.
Beyond an overview of the unique requirements of OpenVMS on 3PAR, this session will include information about driving storage management and monitoring operations via DCL, Python, and SSH, such as online snapshots and presentation management. Performance considerations and modern presentation techniques such as thin provisioning and data deduplication will also be covered.
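As a flavor of the kind of scripted operation the session describes, here is a minimal Python sketch of driving a snapshot over SSH. The host name, user, and volume names are hypothetical examples; the 3PAR CLI verb `createsv` (create a read-only snapshot volume) is assumed.

```python
# Sketch: creating a 3PAR read-only snapshot from a script over SSH.
# Host, user, and volume names below are hypothetical; the 3PAR CLI
# command "createsv -ro <snap> <base>" is an assumption for illustration.
import subprocess

def threepar_snapshot_cmd(array_host: str, user: str,
                          base_vv: str, snap_vv: str) -> list[str]:
    """Build the ssh command line that creates a read-only snapshot."""
    return ["ssh", f"{user}@{array_host}", "createsv", "-ro", snap_vv, base_vv]

def take_snapshot(array_host: str, user: str, base_vv: str, snap_vv: str) -> int:
    """Run the command; returns the ssh exit status."""
    return subprocess.run(
        threepar_snapshot_cmd(array_host, user, base_vv, snap_vv)).returncode

# Example invocation (not executed here):
# take_snapshot("3par01", "3paradm", "ORA_DATA_VV", "ORA_DATA_SNAP")
```

The same pattern extends to presentation management (exporting and unexporting virtual volumes) by substituting the appropriate CLI verbs.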
The BLISS programming language originated at Carnegie-Mellon University in
1969, originally for the DEC PDP-10. BLISS was adopted as DEC's
implementation language for use on its new line of VAX computers in 1975.
DEC developed a completely new generation of BLISS compilers for the VAX,
PDP-10 and PDP-11, which became widely used at DEC through the 1990s. With the
creation of the Alpha architecture and the growth of the IA32 and IA64
architectures, BLISS was enhanced and implemented there as well. In addition to
OpenVMS itself, many layered products and associated components are
developed primarily in BLISS.
This session provides an introduction to BLISS for people familiar with
other programming languages, and with OpenVMS in particular. The unique
capabilities and characteristics of BLISS discussed include the use of an
explicit contents-of operator (written as a period or ‘dot’), an algorithmic
approach to data structure definition, being an expression language, and its
unusually rich compile-time language. Those wishing to understand OpenVMS
listings or to implement applications and systems will gain insight into
the use and behavior of BLISS. BLISS compilers for OpenVMS VAX, Alpha and
IA64 are available on the OpenVMS Freeware CD, providing a free, powerful,
and standard implementation language for OpenVMS.
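A minimal BLISS fragment (an illustrative sketch, not taken from the session) shows two of the characteristics described above: the contents-of ‘dot’ operator and the expression-language nature of the language.

```
ROUTINE INCR (X) =
BEGIN
    LOCAL TEMP;
    ! The dot fetches the contents of a location: .X is the value stored at X
    TEMP = .X + 1;
    ! BLISS is an expression language: the value of the last expression in a
    ! block is the value of the block, so INCR returns .TEMP
    .TEMP
END;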
This is the latest update to this popular series of sessions. VMS has been
around for a long time and in all likelihood, you are NOT running on that
same VAX-11/780 you purchased in 1980. Thanks to the fine engineering that’s
gone into VMS, many VMS migrations truly are non-events. But when business
needs cause changes such as moving entire data centers, requiring modern
disaster recovery from your 1990s Alpha Servers, or jumping over 30 years of
technological changes only to discover that the “build procedures” were not
quite as robust as once imagined, the migrations become much more
interesting. In this updated session, we will discuss several of the more
esoteric OpenVMS migration projects that we have been involved in and how we
made them a success.
For years your systems were built and managed by the same person – we’ll
call him Les. But now Les has retired and you are on your own. You believe
Les has the systems configured with every availability feature, designed to
withstand multiple failures. Les was very good at his job and it all seems
to be working because it’s been a long, long time since you’ve had an outage.
But Les did a lot every day to maintain your systems and ensure
availability. Who is doing that now, and are you confident it’s being done to
the same extent Les would do it? Your systems are continuing to evolve – are
you confident changes are being implemented to Les’ standards for
availability? As the person now responsible for your OpenVMS systems, do you
have peace of mind that if something goes wrong you can restore services and recover?
In a complex environment, there are literally thousands of things that can go
wrong – and they are often the result of neglect, mismanagement or
operational errors – all things that wouldn’t happen under Les’ watchful
eye. While not all are likely to crash the computer or cluster, many of
these problems can cause unplanned outages for portions of your application
– or worse. Are you confident your systems won’t fall victim to one of these problems?
This session discusses proven methodologies and best practices for VMS system
management you need to make sure are implemented to ensure your systems
operate the way Les would have them operating.
Still running your business-critical applications on aging, difficult-to-support
VAX or Alpha hardware? You’ve probably heard of CHARON emulation
products that allow you to run VMS on modern, supportable x86 architecture
platforms, but have you heard of the many creative opportunities CHARON
allows for? How about running on virtualized platforms (VMware)? Connecting
your old VAX to corporate SAN storage? Easily using the corporate backup
solution, automated from VMS? Using your DR site without adding legacy
hardware to it? And so much more… All of these things are easier to
implement than you may think!
Come to this session to learn how to replace those aging VAX or Alpha
servers by virtualizing them with the CHARON Cross-Platform Virtualization
products and take full advantage of the modern technologies in your data center.
Updated with recent real-world examples from SCI’s extensive CHARON customer base, this session will discuss:
• A brief overview of the virtualization architecture and functionality
• Discussion of the newest features in the CHARON product set, with a focus on the new Linux utilities
• Virtualizing the host: CHARON on VMware
• Integration with a modern data center
• Deployment strategies, including server consolidation and virtual OpenVMS clusters
• Ease of migration
• Disaster tolerance
Only in fairy tales can slow be great and win the race. In real life
commercial computing, rarely is this the case. And just as often, there is a
lot more to the story. It is not magic, though villains and heroes
contribute to the legend and lore. This session covers the identification
and analysis steps used to pinpoint and correct application and system
performance "hot spots" contributing to “slow”.
Topic areas include CPU, IO and contention bound environments including RMS
files, alignment faults, code compilation options, caches, buffering, etc.
Application developers, IT decision makers, and system managers will want to
make sure to attend this session. This is an updated version of a session
from 2015, with additional proof that slow really does only win in fairy tales.
Understanding the Integrity console can be critical when dealing
with boot problems. It is also used to update firmware and archive the motherboard settings. The time to learn about the console
is NOT when the system fails to boot or the motherboard has to be replaced. To be prepared, attend this session. If you never have to
use it, great, but I HIGHLY recommend archiving the motherboard settings, because sometimes hardware breaks.
This session will dive into the use of SDA extensions to troubleshoot IP connection problems as well as various performance
issues, such as alignment faults and MP synchronization time. Knowledge of SDA is not a prerequisite, but we will cover what SDA extensions
are and how to identify which ones are available on your system.
This session will provide both a foundational review of VMware and the specifics that apply to OpenVMS admins. While VMware
currently benefits only those who run a pre-Integrity emulator (with HPE or VSI VMS), it is one of the chosen virtual machine platforms for x86 VSI VMS!
Clustering, Networking, Storage options and more!
This session talks about locking as a crucial aspect of OpenVMS cluster performance. OpenVMS records statistics on lock operation rates,
which can tell you which resources in the cluster are the busiest, and thus may be the most fruitful initial targets for investigation.
Any time lock queues develop on resources, those queues can be a great indicator of a potential bottleneck in the system. Tools and techniques to measure lock
activity rates and lock queues are described, as well as a real-life case study.
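The ranking idea behind picking an initial focus of investigation can be sketched as follows. The resource names and counts below are hypothetical sample data; on OpenVMS they would come from the per-resource lock operation statistics the session describes.

```python
# Sketch: rank resources by lock operation count to pick the busiest ones
# as the initial focus of investigation. Sample data below is hypothetical.
def busiest_resources(lock_ops: dict[str, int], top_n: int = 3) -> list[str]:
    """Return resource names sorted by lock operation count, busiest first."""
    return sorted(lock_ops, key=lock_ops.get, reverse=True)[:top_n]

sample = {
    "RMS$ORDERS.DAT": 48000,     # lock ops per interval (hypothetical)
    "SYS$SYSDEVICE": 1200,
    "RMS$CUSTOMER.IDX": 91000,
}
print(busiest_resources(sample, 2))  # the two busiest resources
```

Lock queue lengths on those same resources would then distinguish "busy but healthy" from "bottlenecked".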
This session provides answers to a variety of questions about multi-site and disaster-tolerant OpenVMS clusters:
- What things can I do to maximize performance in a multi-site cluster?
- How many sites can I have in a single cluster? And how far apart can my sites be?
- How should I choose node names, choose allocation classes, and number my disk devices, and why?
- When should I use IPCI (IP as a Cluster Interconnect)?
- Should I use different Site Numbers in a multi-site OpenVMS cluster? Or use different Read Costs for shadowset members?
- How should I arrange for servers in my disaster-tolerant cluster to start up?
This session describes how Host-Based Volume Shadowing works, and the best practices that help you maximize
performance and availability in shadowed disk configurations. Information is included about shadowing full-copy and full-merge
operations, mini-copy and mini-merge operations, write bitmaps, the Automatic Mini-Copy on Volume Processing (AMCVP) feature,
and when to use different Site IDs and Read Cost values.
This session is a comprehensive overview of the OpenVMS 8.x implementation of
the SAMBA/CIFS file sharing mechanism. While there are many ways to configure
this product to safely and securely share files, this presentation will focus on
integration with Microsoft Active Directory, the most common directory
infrastructure in organizations today. All facets of the implementation will be
covered, from installation to managing security once part of the Active Directory
domain. Troubleshooting and monitoring techniques will also be discussed.
Performance is often a pivotal concern when evaluating a migration from late
model EV6/EV7 AlphaServers to a CHARON emulated environment. An AlphaServer
GS80, for example, offered the highest level of performance in the industry when it
was introduced, and businesses rely to this day on that capability.
When deploying CHARON emulation, numerous performance experiments are run using
combinations of customer applications and synthetic tests. Results indicate a
balanced configuration in which the emulator is faster for some tests and slower
for others. This information further guides SCI in tuning the customer’s application
environment to ultimately provide performance similar to or superior to that of the
physical Alpha being replaced.
This in-depth technical session provides insights into various performance
characteristics of CHARON emulation as compared with the original physical
AlphaServer computers. Multiple test case behaviors and results are explained
and examined in relationship to ‘real-world’ application benefits.
This presentation focuses on lower-layer LAN/WAN (TCP/IP / DECnet / cluster)
troubleshooting using the built-in tools of OpenVMS, HP hosting hardware (e.g.,
Virtual Connect / c7000 enclosures), and the open-source network packet analysis
tool, Wireshark. Additionally, there will be some discussion of best practices
for provisioning the network to achieve the highest availability and
performance, leveraging OpenVMS constructs and external connectivity.
Booting an Integrity is quite a bit different from booting a VAX or Alpha.
This presentation discusses how to boot the system in conversational mode (with
a boot option or manually) and how to boot the system manually (assuming you can
access the disk from the console). It will also discuss the various ways to
shut down the system, including multiple ways to force a crash. Also shown will
be the utility to back up the NVRAM on the motherboard, to be restored when you
have to replace the motherboard. This session will include a replication of the
most unusual boot problem the presenter has ever seen, which happened to be on an
Integrity, but could happen on any architecture.
This is a 2 part session. One part is general performance issues such as high
MP sync, alignment faults, and high interrupt or kernel mode time. MONITOR can
identify that these events are occurring, but what is creating them? SDA
extensions can be very useful in identifying the root cause.
Network performance issues are usually best handled by the network
administrators, however there are a couple of issues that may be identified from
VMS and fixed within VMS.
Selecting the best disk in a multi-site/multi-SAN shadowset, and how to manage it, will also be discussed.
Part 2 is what to do when you have a hung cluster, with the symptom of getting a
username prompt but never a password prompt. This is usually indicative of a
problem accessing the sysuaf file. If the problem is the disk that contains the
sysuaf/rightslist is broken you are out of luck. But if one of the systems has a
lock on the sysuaf, then crashing that node will usually fix the problem. Here is
the dilemma: which node do you crash? This is where Availability Manager can
help you identify WHICH node to crash.
This session describes how to measure and improve the health of the network used as an OpenVMS cluster interconnect, primarily using the LANCP and SCACP utilities.
Redundancy in configurations and monitoring redundant components for failures are covered. The session closes with a case study of a recent challenge with cluster interconnect
health at a cluster site.
This session describes tools and techniques to measure and improve performance of the network used as an OpenVMS cluster interconnect. Various factors affecting
performance, including SCS credit waits, parallelism and scaling in network configurations, PEDRIVER transmit window sizes, network packet latency, and saturation of a CPU in interrupt
state, are covered. The session concludes with a case study of a real-world cluster performance issue.
This session provides an update on support for Thin Provisioned volumes in OpenVMS since the earlier session from Boot Camp 2014, describing the new features released
in the VMS84I_SYS-V0700 and VMSA_SYS-V0700 ECO update kits. New DCL command qualifiers, internals of how the new features were implemented, best practices, and known issues, are all described.
This session provides new information learned since the related presentations covering these technical areas from the Boot Camps held in 2009, 2010, and 2011. It starts with
performance recommendations for OpenVMS in general, presents early findings related to cluster interconnect health and
performance, and provides several case studies, including one where
Norm Lastovica from SCI discovered a problematic area in OpenVMS that was later corrected, other cluster
performance issues, and problems stretching a long-distance cluster.
This session describes the Intel Itanium 9500 Series processor (code name "Poulson") that formed the basis of the HP Integrity i4 Server line. This new chip provided
twice as many cores per processor chip (8 vs. 4), as well as significantly higher clock rates of 2.53 gigahertz, compared with 1.73 gigahertz for the earlier 9300 Series (code name "Tukwila")
processors that formed the basis of the prior i2 generation of HP Integrity Servers. At that point in time, HP had announced (in 2013) that it would not support the i4 Servers with HP OpenVMS;
however, in July 2014, HP announced that a new deal with VMS Software, Inc. would allow i4 Servers to be supported thanks to a planned new VSI version of OpenVMS. In May 2015, VSI
announced version 8.4-1H1, which included support for i4 Servers. This Boot Camp occurred in September 2015. The session includes some initial performance measurements from i4 Servers.
At the point in time of preparation for this Boot Camp, HP had announced back
in June of 2013 that it would not support i4 Servers with HP OpenVMS. This
presentation was thus prepared with the goal of helping customers make do with
i2 Servers, since they would be unable to take advantage of the faster i4
Servers. It was only a couple of months before the Boot Camp was held that VMS
Software, Inc. made a deal with HP and announced they would support i4 Servers
with a new VSI version of OpenVMS. So as it turned out, this presentation would
tend to help those who were limited by other reasons, such as budgets, from
acquiring newer and faster hardware.
This session provides an update from the previous year's Boot Camp
presentation of the same title that explored possible future directions for
OpenVMS advocates to explore after HP made its 2013 announcement that it would
not support OpenVMS on i4 or future servers. This presentation describes the
July 31, 2014 announcement of an agreement between HP and VMS Software, Inc. (VSI)
to support i4 Servers with a VSI version of OpenVMS. It describes the options
available to OpenVMS customers at that point in time.
This session describes the feature of Thin Provisioning that
was being introduced in various vendors' storage subsystems at the time, and
specifically how OpenVMS would inter-operate with this new feature in the
absence of any special knowledge within OpenVMS about this feature. This
presentation provided the inspiration for a customer's request to HP to provide
better support of Thin Provisioning in OpenVMS, which resulted in additional
features in a future ECO kit.
This Boot Camp was held after HP announced it would not support i4 Servers
with OpenVMS. Some customers were considering this the end of the road for
OpenVMS, and were starting to plan migrations to other platforms. This session
was part of a seminar providing an introduction to Linux system administration
as taught by Rob Eulenstein, and provided an introduction to the Linux operating
system for OpenVMS professionals, including easy ways to play with Linux for
free, either using a Linux Live CD or by running Linux in a free virtual machine
environment under Windows.
This session was another response to the increased interest in
Linux by OpenVMS professionals after HP's 2013 announcement. This session
compares the features and benefits of OpenVMS clusters with the Red Hat Clusters
product on Linux. It describes the relative strengths and drawbacks of Linux
clusters compared with OpenVMS clusters, and describes the different approaches
to quorum schemes and what the products do to avoid a potential "split brain"
(partitioned cluster) scenario.
This session provided an overall strategy for handling
performance problems on OpenVMS, including how to identify and eliminate
bottlenecks, and several keys to successful performance analysis and
troubleshooting. It reviewed OpenVMS performance issues from the past, and how
they were solved. It then summarized the current OpenVMS performance challenges
that customers tend to run into, and provided a deep dive with case studies into
several of these contemporary performance issues.
This session explored possible future directions for OpenVMS advocates to
explore after HP made its 2013 announcement that it would not support OpenVMS on
i4 or future servers. One of the interesting alternatives explored was the
potential development of an open-source clone of OpenVMS, similar to how Linux
and OpenBSD are free and open-source clones of the proprietary UNIX operating system.
This session explores two case studies of OpenVMS clusters. The first deals
with a performance anomaly causing 6-to-7 second pauses in the cluster,
describes the approaches and troubleshooting steps taken, and the results and
recommendations made. The second study focuses on how to optimally configure IP
as a Cluster Interconnect in a disaster-tolerant OpenVMS cluster.
This session describes the new feature of using IP networks as a cluster
interconnect (IPCI) that was introduced in OpenVMS version 8.4. It starts by
describing the previous cluster interconnect requirements, and addresses
several popular myths about IPCI. It describes how to start using IPCI, and the
files and tools and SYSGEN parameters that control the IPCI configuration. It
describes the design alternatives considered by HP OpenVMS Engineering, as well
as the approach chosen, and the changes required in OpenVMS cluster and TCP/IP
Services for OpenVMS code to support this new feature. It concludes with some
preliminary performance results, as well as providing tuning recommendations.
Although OpenVMS Host-Based Volume Shadowing is intended primarily as a tool
to improve availability, it also has performance implications. The I/O
algorithms used for reads and writes, both during steady-state operations as
well as during full- and mini- copy and merge operations, are described.
Scattered throughout are best practices and performance tips.
This session gives examples of long-distance disaster-tolerant clusters with
inter-site distances of 3,000 miles and 600 miles, Volume Shadowing across 1,400
miles, and performance problems after implementing a disaster-tolerant cluster
of only 20 miles inter-site distance. Also covered is a case with a quorum node
at a 3rd site and the network connection needs for that quorum node, as well as
disk and node Site IDs to direct shadowed reads from Fibre Channel storage to
read from the local disk instead of the remote disk. The final case discussed
proposed using multiple HP Virtual Machine (HPVM) instances at a central site to
provide one end of disaster-tolerant clusters for 29 separate remote sites.
This session describes what people can do to survive
disasters, and factors involved in risk and survival. Knowing how humans react
physically and psychologically in a crisis can be helpful. Training and
preparation can be key. Personal initiative can make all the difference in a disaster.
This session describes what businesses can do to survive disasters, and
factors involved in their survival. It provides case studies of four actual
businesses that experienced disasters, with specifics on how each one was able to survive.
OpenVMS Host-Based Volume Shadowing originally limited the
number of disks which could be members of a shadowset to three. OpenVMS version
8.4 introduced the ability to have four, five, or six members in a
shadowset. This session describes possible uses for and advantages of having
larger shadowsets. It also describes workarounds people used in the past to get
around the original 3-member limit (for the benefit of customers stuck on older
versions). The changes in Shadowing internals needed to support more than 3
members are described in detail. Performance considerations are covered, and
some initial performance numbers are provided.
This session uses a case study of
implementation of a new disaster-tolerant OpenVMS cluster to describe best
practices and principles that will provide the most success in troubleshooting
and problem solving in an OpenVMS environment.
This session discusses the concept of "nines" used as a metric to describe
availability, and describes many best practices with regard to high
availability, such as knowing what components are most likely to fail,
eliminating single points of failure through redundancy, monitoring for failures
of redundant components, persistently finding solutions to problems to prevent
their recurrence, the use of diversity of technologies to improve survivability,
use of a test environment, managing software and firmware versions and released
fixes, managing change through a change control process, minimizing complexity,
and reducing the impact of failures.
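The "nines" metric mentioned above maps an availability percentage directly to allowed downtime; for example, "three nines" (99.9%) permits roughly 8.76 hours of downtime per 365-day year. A quick sketch of the arithmetic:

```python
# Convert a number of "nines" of availability into allowed downtime per year.
# E.g. 3 nines = 99.9% available = 0.1% downtime = about 8.76 hours/year.
def downtime_hours_per_year(nines: int) -> float:
    """Allowed downtime per (365-day) year for a given number of nines."""
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * 365 * 24

for n in (2, 3, 4, 5):
    print(f"{n} nines: {downtime_hours_per_year(n):.2f} hours/year")
```

Five nines works out to about 5.26 minutes per year, which is why each additional nine demands substantially more of the practices listed above.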
Technical sessions on disaster tolerance, covering the topics of how the disaster-proof OpenVMS cluster recovered so fast (and how yours can too), and simulation and testing of long-distance DR/DT configurations.
RMS-related sessions covering the topics of detecting and solving performance bottlenecks using locking data, and sizing RMS global buffers, including
GLOBAL_BUFFER_USAGE.COM, a DCL command procedure to examine RMS global buffer usage for sizing purposes (choosing the appropriate number of
RMS global buffers for a given file). It shows the current, peak, and total number of RMS global buffers for each file which is open on an OpenVMS system. (If the current or
peak number is at or near the total number available, you may need more RMS global buffers; if the peak is nowhere near the total available, you may have more global
buffers allocated than you really need.)
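The sizing rule in parentheses above can be sketched as a small decision function. The numbers and the "near" threshold of 90% are hypothetical illustrations; GLOBAL_BUFFER_USAGE.COM itself is a DCL procedure, and this is only the heuristic restated in Python.

```python
# Sketch of the RMS global buffer sizing heuristic described above:
# compare peak usage against the total allocated for one file.
# The 90% "near total" and 50% "nowhere near" thresholds are assumptions.
def global_buffer_advice(current: int, peak: int, total: int,
                         near_fraction: float = 0.9) -> str:
    """Advise on the RMS global buffer count for one open file."""
    if peak >= near_fraction * total:
        return "increase"   # peak at or near total: may need more buffers
    if peak < 0.5 * total:
        return "decrease"   # peak nowhere near total: likely over-allocated
    return "ok"

print(global_buffer_advice(current=180, peak=198, total=200))  # increase
print(global_buffer_advice(current=20, peak=45, total=200))    # decrease
```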
Bryan Holland, founder and president of SCI, will be presenting on industry-standard protocols for securing your Oracle Rdb databases. Drawing on his 20+ years of experience managing hundreds of Oracle Rdb databases, Bryan will offer best-practice ‘tricks and tips’ for keeping the bad guys out and the auditors happy.
When properly configured and managed, Oracle Rdb provides a highly reliable,
high-performance database environment that is ideally suited for
mission-critical applications where downtime is not an option. However, poor
design, configuration problems or operational management issues can lead to
performance, reliability and availability problems -- and loss of data.
This session focuses on the basics of Rdb database
administration, based on 20+ years of experience supporting Rdb
databases in a variety of environments. This session provides a guide
to "best practices" for managing Rdb databases, including both what
you MUST do -- as well as what NOT to do. We will cover everything
from the basics of file-placement and backup strategies to more
advanced topics like implementing a high-performance Row Cache, and more.
While specific to Rdb, many of the "best practices" also apply to
other database engines...
This presentation was given at the 2009 HP Technology Forum & Expo
in Las Vegas, NV USA on June 16, 2009 by
Bryan Holland, President and Founder of SCI
Since the release of OpenVMS on Integrity in early 2005, HP has been promoting
the ease of migration to OpenVMS on Integrity servers. But, is it really that
easy? With proper planning and execution, and some experiences we'll share in
this session, we've found that migrations can be almost as easy as HP promotes.
This session is a study of multiple,
real-world, customer migrations: two mission critical OpenVMS
clusters (1 VAX, 1 Alpha) to 3 Integrity clusters and 5 standalone
VAX servers migrated to standalone Integrity servers. This session
will focus more on the project level aspects, the planning, design,
implementation and management of the migrations, drawing on the
experiences of actual migrations. We'll discuss the processes and
methodologies we've developed doing migrations, the problems we've
encountered and how we solved them, etc.
This will not be
"yet-another-integrity-porting" session -- this is the real thing,
based on real customer experiences.
This presentation was given at the 2009 HP Technology Forum & Expo in Las
Vegas, NV USA on June 18, 2009 by
Brad McCusker, formerly of the OpenVMS engineering group. Brad manages SCI's OpenVMS System Services business.