Partition Manager Help

Glossary

90th percentile

The utilization value in the selected time interval that 10% of the utilization values fall above and 90% fall at or below.
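
The definition above can be illustrated with a short calculation. The following Python sketch (not Capacity Advisor's exact interpolation method, which is not described here) picks the smallest sample value that at least 90% of the samples fall at or below:

    import math

    def percentile_90(values):
        # Sort the samples, then take the value whose rank covers 90%
        # of the samples (so 90% fall at or below it, 10% above).
        ordered = sorted(values)
        rank = math.ceil(0.9 * len(ordered))
        return ordered[rank - 1]

    print(percentile_90([10, 20, 30, 40, 50, 60, 70, 80, 90, 100]))  # -> 90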

activate cell

The process of changing an inactive cell into an active cell. A cell is activated when it is integrated into an nPartition.

activate I/O chassis

The process of changing an inactive I/O chassis into an active I/O chassis. A chassis is activated when the cell to which it is attached is activated.

activated processor

A processor that has been turned on either by the Instant Capacity software or during processor installation. Processors are activated with the icod_modify command (or the vparmodify command in a virtual partition) while HP-UX is running.

active cell

A cell that is available for use by the software running on the nPartition. This implies that the cell's processors and memory (and I/O, if the cell is attached to an active I/O chassis) are all available for use by the OS. An active cell has the following characteristics:

active I/O chassis

An I/O chassis with an initialized link to the system bus adapter (SBA). The SBA link must be initialized for software running on the nPartition to be able to use I/O cards installed in the I/O chassis.

active nPartition

An nPartition is active if at least one of the cells in the nPartition is active.

see also inactive nPartition

add-on system

A system that has been converted to an Instant Capacity system. This process is performed by an HP service representative.

advisory mode

SRD advisory mode lets you see what requests gWLM would make for a compartment without changing its resource allocation.

see also managed mode

see also deploy

allocation

The amount of a resource, such as processor, that gWLM sets aside for a compartment after arbitrating resource requests from the policies for all the compartments.

In managed mode, gWLM makes an allocation available to a compartment. In advisory mode, gWLM reports what the allocation would be without changing resource allocations on a system.

see also entitlement

application

A collection of processes that perform a specific function.

assign cell to an nPartition

A modification of the Stable Complex Configuration Data to change a cell from a free cell to an assigned cell in a specific nPartition. Once assigned to an nPartition, a cell must be activated in order to use the cell's resources.

association
  1. In SIM, an association is created by discovery and identification of SIM system objects that are then associated with other objects. One type of association is containment. For example, clusters contain members, complexes contain nPartitions, and OS images contain resource partitions.

  2. In gWLM, a policy-workload association tells gWLM which policy to use to manage that workload's resource allocation.

available resources

Cells and I/O chassis that are not assigned to an nPartition; or processors, memory, and I/O resources that are not assigned to a virtual partition. These resources are available to be used in new partitions or can be added to existing partitions.

average

The sum of all the utilization values divided by the number of data points for the selected time interval.

backing store

A device accessible to the Integrity VM Host that maps to a storage device on a virtual machine.

base cabinet

A compute cabinet that can be used as the only compute cabinet in a complex, or as half of a dual compute cabinet complex. A base cabinet is always physically the left cabinet in the pair (when viewed from the front) and is always the cabinet that contains the Service Processor.

see also expansion cabinet

base cell

A cell in a partitionable system. In future versions of partition management software, base cells may be distinguished from other cell types.

BCH

Boot console handler. The system firmware user interface that allows boot-related configuration changes and operations on PA-RISC systems. For example, BCH provides a way to specify boot options and the choice of boot devices. The EFI Boot Manager provides a similar function for Itanium®-based systems.

BIB

Boot-is-blocked. The state of a cell that is powered on but not allowed to boot. BIB exists as soon as power is enabled to a cell, although the system firmware completes its power-on self-test sequence before waiting for BIB to be cleared by the Service Processor. BIB is cleared when the Service Processor is told to boot an nPartition. BIB is also cleared when the system firmware determines that there is no active Service Processor in a complex.

see also ready for reconfiguration

boot console handler
see BCH

boot-is-blocked
see BIB

bound processor

In A.03.x versions of vPars, a bound processor is a processor that can handle interrupts for a virtual partition. Bound processors cannot be migrated from one virtual partition to another if either of the virtual partitions is running. Every virtual partition must have at least one bound processor.

The distinction between bound and unbound processors does not apply to vPars version A.04.x.

see also unbound processor

cabinet

The physical enclosure that contains cells or I/O chassis. A cabinet also includes hardware that provides power and cooling. Some cell-based servers support cabling several cabinets together to form a single complex.

cabinet blowers

The main cooling fans on top of HP Superdome server compute cabinets. They provide the main airflow through the cabinet.

Capacity Advisor

HP Integrity Essentials Capacity Advisor. The VSE Management Software application that is responsible for analysis and planning of workloads on a system or across a set of systems.

capacity planning

The analysis and planning of workloads on a system or across a set of systems.

capacity-planning simulation

The process of combining workload demand profiles, as prescribed by a scenario, to estimate the demand profiles of the systems that contain the workloads. Statistics gathered from the simulation can be summarized in reports.

CC

Cell controller. A chip located on every cell board that has interfaces to the processors and memory on the cell. The cell controller also has an interface to a system bus adapter and to the fabric. The cell controller maintains data coherency across the cells in an nPartition.

cell

A circuit board that contains processors and memory, all controlled by a cell controller (CC). A cell is the basic building block of an nPartition in a complex.

cell controller
see CC

cell local memory
see CLM

cell power on/off

Enable or disable power to a cell. A cell cannot become active until power has been enabled. It must be inactive before power can be disabled. A cell location must be populated in order to enable power. Physical removal of a cell must not occur until power has been disabled.

Powering a cell on or off will also power on or off an I/O chassis that is attached to the cell.

cell-based server

A server in which all processors and memory are contained in cells, each of which can be assigned for exclusive use by an nPartition. Each nPartition runs its own instance of an operating system.

central management server
see CMS

chassis log

The term used for the event log on cell-based servers based on the PA-8700 processor.

CLI

Command line interface. An operating system shell for direct entry of commands by the user.

see also GUI

clipping

In gWLM, the limiting of a policy's resource request.

Types of clipping include:

  • Compartment clipping.  A workload's compartment may already be at its maximum size (for example, as set using a vPars command), with policy requests trying to increase it beyond its configured maximum.

  • Policy clipping.  A workload receives the maximum processor allocation allowed based on its policy; however, the request would be higher if the policy maximum were higher.

  • Priority clipping.  There are not enough resources for the compartments at lower priority levels, because of the need to allocate resources for compartments at higher priority levels. Note that resources are allocated for fixed policies, OwnBorrow policies, and policy minimums before gWLM considers priorities.
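
As an illustration of the first two clipping types, the following Python sketch (hypothetical names, not gWLM's actual code) clips a policy's request to the smaller of the policy maximum and the compartment's configured maximum:

    def clipped_allocation(request, policy_max, compartment_max):
        # The allocation can never exceed the policy maximum (policy
        # clipping) or the compartment's configured maximum size
        # (compartment clipping), whichever is smaller.
        return min(request, policy_max, compartment_max)

    print(clipped_allocation(request=6, policy_max=8, compartment_max=4))  # -> 4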

CLM

Cell local memory. Cell memory that is not interleaved. A page of cell local memory comes from a single cell. Cell local memory provides better performance than interleaved memory for processes running on the processors in the cell that contains the memory.

see also interleaved memory

cluster

A set of two or more systems configured together to host workloads, such that users are unaware that more than one system is hosting the workload.

CMS

Central management server. A system in the management domain that executes the HP Systems Insight Manager software. All central operations within HP Systems Insight Manager are initiated from this system.

codeword

The component licensing mechanism used with Instant Capacity versions B.07.x software. Prior to activating an Instant Capacity component, a right-to-use (RTU) codeword must be applied to an Instant Capacity system. Codewords are obtained from the Utility Pricing Solutions Portal after you purchase a component.

command line interface
see CLI

compartment

An nPartition, virtual partition, virtual machine, or resource partition whose resources are allocated by gWLM.

Multiple compartments are grouped to form a shared resource domain (SRD). The compartments all share the resources of the SRD. A compartment can be in only one SRD. Each compartment holds a workload. gWLM manages each workload's resource allocation by adjusting the resource allocation of its compartment.

compartment consumption

The amount of a resource being consumed by all of the processes in a compartment. For example, if the processes in a compartment consume a total of two processors, the compartment consumption of processors is two.

compartment utilization

The compartment consumption of a given resource as a percentage of the compartment's size. For example, if a compartment's consumption is two processors and its size is four processors, the compartment utilization of processors is 50%.

CompartmentMax

The maximum amount of a resource that a compartment can have. This value is the maximum resource allocation allowed by the underlying compartment. However, gWLM might reduce this number when an SRD has a large number of compartments, because each compartment must receive a minimum portion of the resources.

see also PolicyMax

CompartmentMin

The minimum amount of a resource that a compartment can have. This value is the minimum resource allocation required by the underlying compartment.

see also PolicyMin

complex

A complex includes one or more cabinets that are cabled together and all of the hardware resources that they contain. A complex has a single Service Processor.

see also server

see also system

complex profile

The data structure managed by the Service Processor that represents the configuration of a complex. The complex profile consists of the Stable Complex Configuration Data for the entire complex, and Partition Configuration Data for each nPartition in the complex.

compute cabinet

Any cabinet containing cells. An I/O expansion cabinet is not a compute cabinet.

configured processor

A processor that has been configured at the boot console handler (BCH or EFI) and is now available for activation by the Instant Capacity software.

constraints

Resource allocation restrictions imposed by either the customer (for example, workload placement restrictions), or the Virtual Server Environment (for example, a cell cannot be subdivided across an nPartition).

see also policy

convergence rate

Indicator of workload sensitivity to changes in processor allocation. Larger values produce larger changes in the allocation, causing faster convergence on the policy's target; smaller values produce slower convergence on the target. The default rate is 1.0.

core

The actual data-processing engine within a processor. A single processor might have multiple cores.

see also processor

core cell

Each nPartition has one cell that system firmware selects at boot time to be the core cell. This cell must be attached to an I/O chassis that contains core I/O. The core cell has the following unique characteristics:

core I/O

I/O hardware that provides the base set of I/O functions required by every nPartition. Core I/O includes the partition console interface and 10/100 BaseT network interface.

core-cell choices

Information in each nPartition's Partition Configuration Data that guides system firmware in choosing the nPartition's core cell. Cells that are identified as core cell choices are tried first (in the order specified) before system firmware applies its default core-cell selection algorithm.

CPU

Central processing unit, or processor.

cross-bar chip
see XBC

current virtual partition

The virtual partition that is running the vPars command currently being executed.

see also local nPartition

custom policy

A policy for managing a workload's compartment. This type of policy allows you to provide your own metric. gWLM then manages an associated workload, adjusting the resource allocation as needed based on how the value of its metric compares to a target you specify. You update values for the metric using the gwlmsend command on the operating system instance where the workload is running.

deactivate cell

The process of changing an active cell into an inactive cell. A cell becomes inactive when a shutdown for reconfiguration operation is performed on its nPartition. A cell can also be deactivated by setting its use-on-next-boot value to No and then performing a reboot for reconfiguration operation on the nPartition.

deactivate I/O chassis

The process of changing an active I/O chassis into an inactive I/O chassis. An I/O chassis is deactivated when the cell to which it is attached is deactivated.

deactivated processor
see inactive processor

deconfigured processor

A processor that has not yet been configured at the boot console handler (BCH or EFI). Instant Capacity and Pay per use software cannot activate a processor that is deconfigured.

demand profile

A set of resource-demand readings made at regular intervals for some period of time. The demand profile of a workload, system, or complex is used when doing capacity planning.

deploy

Enable gWLM control of a shared resource domain (SRD).

Deploying an SRD in managed mode enables gWLM control of resource allocation within the SRD. For example, in an SRD that is based on a virtual partition with processor sets (PSETs) for compartments, deploying an SRD in managed mode allows gWLM to migrate processors between PSETs.

When deploying an SRD in advisory mode, gWLM only reports what the allocation would be without actually affecting resource allocations on a system.

see also undeploy

DIMM

Dual In-line Memory Module, a standard memory-chip format.

discovery
  1. In system management applications, the process of finding and identifying network objects. In HP Systems Insight Manager, discovery finds and identifies all the HP systems within a specified network.

  2. gWLM can examine systems that you specify and automatically identify the nPartitions, virtual partitions, and processor sets (PSETs) that are present on those systems. You then form SRDs based on the discovered nPartitions, virtual partitions, and PSETs.

Dual In-line Memory Module
see DIMM

dynamic processor migration

A vPars feature that allows you to add unbound processors to a virtual partition, or remove them from a virtual partition, while the virtual partition is running.

echelon

A set of DIMMs installed as a single failure group. If any DIMM in the echelon fails or is deconfigured, the entire echelon is deconfigured. Some HP server models use an echelon size of 4 DIMMs; others use an echelon size of 2 DIMMs.

Effective PolicyMax
see PolicyMax

Effective PolicyMin
see PolicyMin

EFI

Extensible firmware interface. The system firmware user interface that allows boot-related configuration changes and operations on Itanium®-based systems. For example, EFI provides ways to specify boot options and list boot devices. The boot console handler (BCH) provides a similar function for PA-RISC systems.

entitlement
  1. The amount of a system resource (for example, processor) that is guaranteed to a virtual machine. The actual allocation of resources to the virtual machine may be greater or less than its entitlement depending on the virtual machine's demand for processor resources and the overall system processor load.

  2. The amount of a resource that is set aside for a compartment.

event log

Information about system events made available from the source of the event to other parts of a server complex. An event log indicates what event has occurred, when and where it happened, and its severity (the alert level). Event logs do not rely on normal I/O operation.

The term “chassis log” was used in place of “event log” on earlier server models.

expansion cabinet

A specially configured compute cabinet that can be connected to a base cabinet to create a dual-compute-cabinet complex. The expansion cabinet is always the right-hand cabinet in the pair (when viewed from the front) and contains a hub to connect it to the Service Processor in the base cabinet.

see also IOX

extensible firmware interface
see EFI

fabric

Within a complex, the interconnect composed of cross-bar chips (XBC) and cells.

Fair-Share Scheduler group
see FSS group

field replaceable unit
see FRU

fixed policy

A policy for managing a workload's compartment. This type of policy guarantees that a workload's compartment has a fixed (constant) amount of processor resources.

Fixed policies do not have a settable priority. gWLM satisfies compartment minimums first; next, it satisfies both fixed policies and policy minimums; finally, it satisfies other policy types.

floater processor
see unbound processor

forecast

A prediction of system utilizations and workload demand profiles for some future time.

free cell

A cell that is not assigned to an nPartition. This applies to any cell location, regardless of whether the slot exists or is populated.

FRU

Field replaceable unit. Hardware that can be replaced by a field engineer. This includes all components that are hot-pluggable or hot-swappable. It also includes many components that must be powered off to be replaced.

FRU ID

Data that provides identification information about a field replaceable unit (FRU), such as the part number, serial number, revision and test history. The FRU ID typically is stored in an EEPROM that is located on the FRU.

FSS group

Fair-Share Scheduler group. A group of processes that has its processor allocation managed by the HP-UX FSS service. FSS groups allow you to allocate fractions of processor resources, rather than only whole processors, to the processes in the group.

Global Workload Manager
see gWLM

GNI

Global noninterleaved memory, another name for cell local memory (CLM).

guest OS

A guest operating system is the operating system that is running on a virtual machine.

GUI

Graphical User Interface. A visually-oriented user interface in which components and actions can be selected by clicking on objects and menus instead of typing command lines.

see also CLI

gWLM

HP Integrity Essentials Global Workload Manager. The VSE Management Software application that allows you to centrally define resource-sharing policies that you can use across multiple HP servers. These policies increase system utilization and facilitate controlled sharing of system resources. gWLM's monitoring abilities provide both real-time and historical monitoring of the resource allocation.

HA

High availability. The ability of a server or partition to continue operating despite the failure of one or more components. High availability requires redundant resources, such as processors and memory, in specific combinations.

The high-availability status of a device group is usually indicated by the following notation.

N+  

This device group can experience a device failure and still function normally.

N  

This device group has just enough good devices to function normally. Subsequent failure of a device in the group can cause the cabinet to shut down.

N-  

This device group does not have enough good components to function normally. If a cabinet is running and goes into an N- cooling state, then the cabinet is automatically shut down. If a cabinet has an N- power state, then devices in the group cannot be powered on. This means that if the cabinet is running, it continues running, but no additional devices can be powered on. If the cabinet is off and comes up in the N- power state, then none of its devices can be powered on.

hard reset

A hard reset, like the reset (RS) command available at the Service Processor prompt, immediately stops the operating system and all applications, without forcing a crash dump.

see also TOC

high availability
see HA

host
  1. A system or partition that is running an instance of an operating system.

  2. The physical machine that is the VM Host for one or more virtual machines.

host name

The name of a system or partition that is running an OS instance.

host OS

The operating system that is running on the host machine.

hot-pluggable

A hardware component that can be added to or removed from a cabinet, with software intervention, while the cabinet remains operational. Examples are PCI I/O cards, cells, and I/O chassis.

These components are hot-pluggable only to the extent that operating system and hardware support is present.

see also hot-swappable

see also FRU

hot-swappable

A hardware component that can be added to or removed from a cabinet, without software intervention, while the cabinet remains operational. Examples are bulk power supplies, cabinet blowers, and I/O fans. These items are hot-swappable if their removal does not create an N-1 HA situation. For example, if a cabinet's power status is N+1, then any one of the bulk power supplies can be removed without affecting the operation of the cabinet.

see also hot-pluggable

see also FRU

hyper-threading

Intel® Hyper-Threading Technology. The ability of certain processors to present a second virtual core, which allows additional processing efficiencies. Hyper-threading does not make a processor a true multi-core processor, but it adds performance benefits. True multi-core processors typically deliver much greater performance than equivalent hyper-threading technology.

I/O bay

The physical location in a cabinet where an I/O support structure is located.

I/O chassis

A PCI or PCI-X card cage and associated backplane that contains a system bus adapter and one or more local bus adapters. An I/O chassis may or may not be physically removable.

I/O chassis enclosure
see ICE

I/O Dependent Code
see IODC

I/O expansion cabinet
see IOX

I/O fans

The fans that are used to cool an I/O chassis. Found in both I/O expansion cabinets and compute cabinets. I/O fans are distinct from cabinet blowers.

I/O support structure

A physical structure in cabinets where one or more I/O chassis are located. In some cabinets the I/O support structure is referred to as an I/O support tray, in other cabinets as an I/O chassis enclosure (ICE). The different names reflect the different physical characteristics of the support structures. The I/O support structure is removable in some cabinet types (for example, I/O expansion cabinet) and is not removable in others.

iCAP

Instant Capacity. The HP Utility Pricing Solutions product whose pricing model is based on purchasing components (processors, cell boards, and memory). With Instant Capacity you initially purchase a specified number of activated components and pay a right-to-access fee for a specified number of deactivated (iCAP) components. To activate a component, you purchase the component and license it through the application of a codeword.

Previous versions of iCAP were referred to as Instant Capacity on Demand, or iCOD.

iCAP component

Instant Capacity component (also referred to as an unlicensed component). An iCAP component is a processor, cell board, or memory that is physically installed in an iCAP system but is not authorized for use. Before it can be used, a right-to-use (RTU) must be purchased and a codeword must be applied to the system.

iCAP processor

Instant Capacity processor (also referred to as an unlicensed processor). A processor that is physically installed in an iCAP system but is not authorized for use and is inactive. After licensing, iCAP processors can be turned on during installation, or later by the Instant Capacity software. Licensed processors are activated with the icod_modify command (or the vparmodify command in a virtual partition).

ICE

I/O chassis enclosure. A specific type of I/O bay on some models of HP Superdome server. An ICE provides mechanical and electrical support for up to two 12-slot I/O chassis.

iCOD
see iCAP

iCOD component
see iCAP component

iCOD processor
see iCAP processor

inactive cell

A cell that is not available for use by software running on an nPartition. This term is usually used to describe a cell that has the following status (though any cell that is not active is by definition inactive).

  • The slot is present and is populated.

  • Power is enabled.

  • The cell is assigned to an nPartition.

see also active cell

inactive I/O chassis

An I/O chassis that is not available for use by the software that is running on an nPartition. An I/O chassis is inactive when it is attached to an inactive cell.

see also active I/O chassis

inactive nPartition

An nPartition in which all of its cells are inactive.

see also active nPartition

inactive processor

A processor in an iCAP system that is currently inactive. Licensed inactive processors can be activated by the icod_modify command (or by the vparmodify command in a virtual partition). An inactive processor is also referred to as a deactivated processor.

see also activated processor

see also iCAP processor

initial system loader
see ISL

Instant Capacity
see iCAP

Instant Capacity component
see iCAP component

Instant Capacity processor
see iCAP processor

Integrity Virtual Machines Manager
see VM Manager

Intelligent Platform Management Interface
see IPMI

interleaved memory

Memory that can be interleaved across more than one cell. Interleaved memory presents a single logical memory address range that is mapped to different physical memory ranges across multiple cells.

see also CLM

IODC

I/O Dependent Code. IODC provides a uniform, architected mechanism to obtain platform information. IODC is composed of two parts. The first part is a set of up to 16 bytes that identify and characterize hardware modules. The second part is a set of entry points that provide a standard procedural interface for performing module-type dependent operations such as boot device, keyboard, and display device initialization and Input/Output routines. IODC is documented in the PA-RISC 1.1 I/O Firmware Architecture Reference Specification.

IOX

I/O expansion cabinet. A cabinet that contains I/O devices (card cages) but no cells.

see also expansion cabinet

IPMI

Intelligent Platform Management Interface. A set of standards for remote multiplatform server management. IPMI uses intelligent platform management hardware and a message-based interface.

ISL

Initial system loader. This program implements the portion of the bootstrap process that is independent of the operating system (OS). The ISL is loaded and executed after self-test and initialization have completed successfully. It provides an interface to select an OS or load a predefined default OS.

Itanium®-based systems

Systems built on any version of the Intel® Itanium® architecture.

LBA

Local bus adapter. A device that connects the system bus adapter (SBA) to an I/O bus, such as PCI. Multiple LBAs are connected to a single SBA.

leaf node

An object at the lowest level of a graphical tree view. Leaf nodes have no child nodes.

local bus adapter
see LBA

local nPartition

Used in a context where an nPartition command is being executed, the local nPartition is the nPartition that is running the command.

see also current virtual partition

see also remote nPartition

LTU

License to use. One of the three main components of gWLM: CMS, agents, and LTU for each agent. The CMS allows you to control and monitor gWLM. The agents run on the systems where you are managing workloads. You install an LTU on each system that runs an agent in order to continue full agent functionality beyond the initial trial period.

managed mode

SRD managed mode lets gWLM automatically adjust the resource allocations for your compartments.

see also advisory mode

see also deploy

managed resource

A resource that can be allocated and controlled by HP Integrity Essentials Virtualization Manager. Managed resources include: processors, memory, disks, and I/O bandwidth.

managed system

A server or other system that can be managed by SIM from a CMS. A managed system can be managed by more than one CMS.

managed workload

A workload that is managed by Global Workload Manager (gWLM).

management domain

A CMS and its managed systems.

Management Processor
see Service Processor

master I/O backplane

The main backplane in a complex into which you plug an I/O chassis.

max 15-min

Maximum 15-minute sustained: the highest value in the selected time interval that was sustained for at least 15 minutes.
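
For regularly sampled utilization data, the sustained maximum is the largest value that every sample in some 15-minute window reaches or exceeds. A minimal Python sketch, assuming 5-minute samples (so a window spans 3 consecutive samples):

    def max_15_min_sustained(samples, samples_per_window=3):
        # For each 15-minute window, the value sustained throughout the
        # window is the window's minimum; the result is the largest such
        # minimum over all windows.
        windows = (samples[i:i + samples_per_window]
                   for i in range(len(samples) - samples_per_window + 1))
        return max(min(w) for w in windows)

    print(max_15_min_sustained([40, 90, 85, 88, 30, 95]))  # -> 85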

measured value

The current value of a metric being used in a policy.

memory echelon
see echelon

metric

A specific measurement that defines a performance characteristic.

metric view selection

In Capacity Advisor, a combination of the statistical model (such as peak or average) used to calculate the metric and whether it is to be presented as a percentage or an absolute value.

migrating processors

The process of activating and deactivating processors across partitions for load balancing.

monarch processor

The main controlling processor of the operating system, designated CPU 0.

monitored workload

A workload that can be monitored by Virtualization Manager but has no policy associated with it. Monitored workloads are not managed by Global Workload Manager (gWLM).

multithreading

The ability of an application and operating system to allow parallel computing by dividing processing between multiple processors or cores.

node
see system

nPartition

A partition in a cell-based server that consists of one or more cells, and one or more I/O chassis. Each nPartition operates independently of other nPartitions and either runs a single instance of an operating system or is further divided into virtual partitions.

nPartitions can be used as compartments managed by gWLM as long as several requirements are met. Refer to the gWLM online help for a description of nPartition requirements.

see also virtual partition

nPartition Configuration Privilege

A feature available on newer cell-based servers that can be used to prevent privileged users on one nPartition from affecting other nPartitions. This feature is configured via the PARPERM command at the Service Processor command interface. For more information, refer to the configuration privilege topic in the Partition Manager help.

nPartition Provider

The WBEM services provider for nPartition information about cell-based servers.

nPartition server
see cell-based server

online activation

The ability to activate a deactivated processor using Instant Capacity (iCAP) software while HP-UX is running. No reboot is required. This is done with the icod_modify command or, in a virtual partition, with the vparmodify command. Online activation is the default behavior of iCAP.

OS

Operating system.

OwnBorrow policy

A policy for managing a workload's compartment. This type of policy allows you to set the following values:

  • The minimum amount of processor resources that a compartment should ever have.

  • The maximum amount of processor resources that a compartment should ever have.

  • The amount of processor resources that a compartment owns.

A compartment is guaranteed to have the resources it owns when they are needed. When a workload is not busy, gWLM may lend the compartment's processor resources to other workloads that are busy, as long as the compartment minimum is maintained. When its workload becomes busy, a compartment immediately re-acquires any resources that were loaned to other compartments. A compartment with a busy workload can borrow processor resources up to its allowed maximum, if resources are available from other compartments.

You can assign a weight to an OwnBorrow policy in order to prioritize resource allocation.
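
The lending and borrowing behavior can be pictured with a simple sketch. The following Python function is illustrative only (hypothetical names, not gWLM's actual arbitration): it bounds a compartment's allocation by its owned, minimum, and maximum values in response to demand and to the spare resources lent by other compartments.

    def ownborrow_allocation(owned, minimum, maximum, demand, spare):
        if demand <= owned:
            # Not busy: lend what is not needed, but never drop below
            # the compartment's minimum.
            return max(minimum, demand)
        # Busy: reclaim owned resources immediately and borrow from the
        # spare pool, up to the policy's maximum.
        return min(maximum, owned + min(spare, demand - owned))

    print(ownborrow_allocation(owned=4, minimum=2, maximum=8, demand=1, spare=0))  # -> 2
    print(ownborrow_allocation(owned=4, minimum=2, maximum=8, demand=7, spare=2))  # -> 6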

owned resources

Resources that are guaranteed to a compartment when they are required. For example, a compartment is guaranteed its owned processor resources when they are needed. A compartment can lend its owned resources to other compartments.

PACI

Partition console interface. Provides console access for an nPartition. PACI is a part of core I/O.

parked workload

A workload that is not currently associated with a system. A workload becomes parked if its system is set to “none” when it is created or later modified. A parked workload that was previously associated with a system may have historical data associated with it from Capacity Advisor or gWLM. As with any workload, the historical data will be lost if the workload is deleted.

When migrating a workload from one system to another, it may be useful to park the workload (removing the association with the original system) until the new system becomes available. This preserves the historical data for the workload across the migration.

partition
  1. A subset of server hardware that includes processor, memory, and I/O resources on which an operating system (OS) can be run. This type of partitioning allows a single server to run an OS independently in each partition with isolation from other partitions.

  2. A resource partition, made up of either an FSS group or a processor set, that runs within a single OS. This type of partitioning controls resource allocations within an OS.

see also nPartition

see also virtual partition

Partition Configuration Data
see PCD

partition console interface
see PACI

partition database
see vPars partition database

Partition Manager

The VSE Management Software application that is responsible for managing and configuring nPartitions on cell-based servers.

partition name

An ASCII string that identifies a partition using a name that is meaningful to the system administrator. The allowed characters and maximum length are different for nPartition and virtual partition names. For nPartitions, partition names do not have to be unique, because the partition number provides a unique partition identifier. Virtual partition names must be unique within the nPartition or server that is running vPars.

partition number

An integer that uniquely identifies an individual nPartition within a complex. Each nPartition is assigned a unique number from 0 to the maximum number of partitions supported minus 1.

partition stable store
see PCD

Pay per use
see PPU

PCD

Partition configuration data. The part of the complex profile that provides partition-specific information. The PCD can be thought of as an array with one element per possible partition indexed by partition number. PCD provides the functionality of stable store in traditional systems.

PCI

Peripheral component interconnect. A standard for the connection between a processor and attached devices.

PCI-X

Peripheral component interconnect extended. An enhanced version of PCI.

PDC

Processor-dependent code.

see also system firmware

PDH

Processor-dependent hardware. The ROM, nonvolatile memory, and PDH controller interface for a cell board. The PDH comprises a controller and its external Flash EPROM, battery-backed SRAM, real-time clock, and external registers.

peak

The highest utilization value in the selected time interval.

peripheral component interconnect
see PCI

policy

A collection of rules and settings that control workload resources. For example, a policy can indicate the minimum and maximum amount of processor resources allowed for a workload, and a target to be achieved.

A single policy can be associated with multiple workloads.

policy pass/fail

A policy can either succeed or fail to meet its target. A failure can be due to clipping of the policy's resource requests.

PolicyMax

The maximum amount of a resource, such as number of processors, for a compartment as specified in that policy's definition.

In graphs, the Effective PolicyMax is shown. This value is the smaller of PolicyMax and CompartmentMax (the maximum amount of a resource that a compartment can have).

PolicyMin

The minimum amount of a resource, such as number of processors, for a compartment as specified in that policy's definition.

In graphs, the Effective PolicyMin is shown. This value is the larger of PolicyMin and CompartmentMin (the minimum amount of a resource that a compartment can have).
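
A minimal sketch of how the effective limits described above combine the policy and compartment values (illustrative Python; the names are hypothetical):

    def effective_limits(policy_min, policy_max, compartment_min, compartment_max):
        effective_max = min(policy_max, compartment_max)  # smaller of the maximums
        effective_min = max(policy_min, compartment_min)  # larger of the minimums
        return effective_min, effective_max

    print(effective_limits(1, 8, 2, 6))  # -> (2, 6)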

PPU

Pay per use. An HP software product that is a part of the HP Utility Pricing Solutions program. PPU implements a pricing model in which you are charged for the processor usage. You acquire a specific hardware platform and number of processors, and are charged for usage of the processors based on system demand.

PPU agent

The Pay per use (PPU) software component that provides information to the utility meter. On HP-UX systems this component is implemented as a daemon named ppud. On Microsoft® Windows® systems, this component is implemented as a service.

priority

The importance of a policy, relative to other policies, as defined by the user. The highest priority is 1. Lower priorities are 2, 3, and so on through 1000.

Global Workload Manager (gWLM) uses priorities to determine the order in which to allocate resources when the sum of the resource requests exceeds the resources available in the SRD.

Fixed policies do not have priorities; their resources are allocated before priorities are evaluated.

If all resource requests have been met and resources are still available, the weight assigned to each policy, not its priority, determines how the excess resources are distributed.

process

Execution of a program or image file. Execution can represent a user or operating system process.

processor

The hardware component that plugs into a processor socket. Processors can contain more than one core.

see also core

processor module

The packaging of one or more processors to connect into a single socket on the system bus. Examples include the Intel® Xeon® FC-mPGA package, the HP mx2 dual-processor module, and the IBM Power 5 MCM.

processor set
see PSET

processor-dependent hardware
see PDH

profile viewer

The Profile Viewer provides a visual display of historical utilization data collected by Capacity Advisor along with additional information you have provided. The Profile Viewer also enables you to examine different time intervals and different categories of data.

PSET

A collection of processors grouped together for exclusive access by applications assigned to that processor set. Each application runs only on processors in its assigned processor set. On Linux systems gWLM simulates PSETs by using processor affinity masks.

Quality of Service (QoS)

A combination of qualitative and quantitative factors, such as uptime, response time, and available bandwidth, that collectively describe how well a system performs. The Quality of Service is frequently embodied in a Service Level Agreement or in a set of Service Level Objectives between or among organizations.

ready for reconfiguration

The state of a cell location that permits its nPartition assignment to be changed. All cell locations whose nPartition assignment is changed must be at the ready for reconfiguration state before the Service Processor can push out the new Stable Complex Configuration Data. A cell location is in the ready for reconfiguration state when any of the following conditions applies.

  • The cell location is not present.

  • No cell is present at that location.

  • The cell is not powered on.

  • The cell is inactive (usually, a cell that is powered on with the boot-is-blocked attribute set).

reboot for reconfiguration

The process of rebooting an nPartition in such a way that all active cells in the nPartition are reset with boot-is-blocked (BIB) set. When the operating system running on the nPartition has finished shutting down, these cells begin their power-on self-test sequence, then wait for BIB to be cleared by the Service Processor. When all of the cells in the nPartition complete self-test, the Service Processor boots the nPartition.

On the HP-UX operating system, reboot for reconfiguration is performed using the reboot or shutdown command with the -R option. The -H option should not be used, so that the nPartition will automatically reboot after reconfiguration.

On Linux and Microsoft Windows operating systems, the normal reboot process performs reboot for reconfiguration.

see also shutdown for reconfiguration

remote nPartition

In a context where an nPartition command is being executed, a remote nPartition is any nPartition other than the one that is running the command.

see also local nPartition

request

The amount of a system resource that a policy asks gWLM to give to the policy's compartment. Each policy makes a request, then gWLM arbitrates the requests from all of the policies to determine what resources will be allocated to the compartments. Requests may be restricted by policy settings and by the compartment definition. For example, if a PolicyMin value is less than a CompartmentMin value, the CompartmentMin value is used instead of the PolicyMin value.

see also custom policy

see also fixed policy

see also OwnBorrow policy

see also utilization policy

resource partition

A subset of the resources available to an operating system instance, isolated for use by specific processes. A resource partition has its own process scheduler. CPU resources in the partition may be allocated using Fair-Share Scheduler groups or processor sets. Policies for controlling the allocation of resources to the partition may be set using Global Workload Manager (gWLM).

resource pool

A set of systems to consider as the possible location of a workload.

see also SRD, boundaries within which resources can be allocated and balanced across workloads

ResPar
see resource partition

right-to-access
see RTA

right-to-use
see RTU

RTA

Right-to-access. The initial fee that you pay to enter the Instant Capacity (iCAP) program and physically acquire possession of an iCAP component (memory, cell board, or processor) that is unauthorized for use and inactive.

RTU

Right-to-use. The fee that you pay to license an iCAP component (memory, cell board, or processor). The right-to-use authorizes you to obtain a codeword to activate Instant Capacity components. The amount paid for this is called the activation fee or enablement fee.

SBA

System bus adapter. The chip in an I/O chassis that provides a connection between the cell controller on a cell and the set of local bus adapters in the I/O chassis.

SBA link

A link from an I/O chassis to its system bus adapter.

SCCD

Stable Complex Configuration Data. The portion of the complex profile that contains attributes of the complex (serial number, model string, and so on) and the assignment of cells to nPartitions.

scenario

A possible configuration of systems and workloads under consideration when doing capacity planning.

see also what-if scenario

secure compartment

A boundary that provides security to a compartment by controlling access and system capabilities available to a set of processes.

secure resource partition

A resource partition that is integrated with HP-UX Security Containment.

server
  1. Physical server:  Hardware that can run one or more operating systems, including a partitionable complex. Also, hardware that can run an instance of the vPars monitor. Server hardware includes one or more cabinets containing all the available processors, memory, I/O, and power and cooling components. HP Integrity servers include two types of server hardware: standalone servers and cell-based servers.

  2. Virtual server:  A software-based virtual environment that can run an operating system. A virtual server includes a subset of the server hardware resources, including processors, memory, and I/O. Virtual servers may be virtual partitions under vPars or virtual machines under HP Integrity Virtual Machines.

  3. HP Systems Insight Manager uses the term “server” for any standalone server, nPartition, or virtual server that is running an instance of an operating system or an instance of the vPars monitor.

see also system

Service Processor

An independent support processor for HP servers that support nPartitions. The Service Processor provides a menu of service-level commands, plus commands to reset and reboot nPartitions and configure various parameters.

The Service Processor in HP servers is sometimes called the Management Processor (MP) or the Guardian Service Processor (GSP).

shared resource domain
see SRD

shutdown for reconfiguration

The process of shutting down an nPartition in such a way that all active cells in the nPartition are reset with the boot-is-blocked (BIB) attribute. When the operating system that is running on the nPartition has finished shutting down, these cells begin their power-on self-test sequence and then wait for BIB to be cleared by the Service Processor. As a result, the nPartition becomes inactive.

On the HP-UX operating system, shutdown for reconfiguration is performed using the shutdown or reboot commands with the -R and -H (or -RH) options.

On the Linux operating system the command shutdown -h now performs shutdown for reconfiguration.

On Microsoft Windows operating systems the shutdown /h command performs shutdown for reconfiguration.

see also reboot for reconfiguration

SIM

HP Systems Insight Manager. The platform and framework on which the VSE Management Software products are deployed.

simulation
see capacity-planning simulation

simulation interval

For Capacity Advisor, a combination of a duration and a starting or ending point that defines the period of time over which the simulation is to be done.

Single System Management
see SSM

size

The amount of a resource that a compartment actually has.

When working with processor resources, size can differ from the actual allocation when gWLM is deployed in advisory mode.

SRD

Shared resource domain. A collection of compartments that share system resources. The compartments can be nPartitions, virtual partitions, virtual machines, processor sets (PSETs), or Fair-Share Scheduler (FSS) groups.

A server containing nPartitions can be an SRD as long as nPartition requirements are met. These requirements are detailed in the gWLM online help topic Getting the most out of gWLM.

A server or an nPartition divided into virtual partitions can be an SRD for its virtual partition compartments. A VM Host can be an SRD to its virtual machines. Similarly, a server, an nPartition, or a virtual partition containing PSETs can be an SRD for its PSET compartments. Finally, a server, an nPartition, or a virtual partition containing FSS groups can be an SRD for its FSS-group compartments.

A complex with nPartitions can hold multiple SRDs. For example, if the complex is divided into nPartitions named Par1 and Par2, Par1's compartments could be virtual partitions, while Par2's compartments are PSETs.

see also deploy

see also advisory mode

see also managed mode

SRD states

An SRD can be in one of two states: deployed or undeployed. When deployed, an SRD can be in one of two modes: advisory mode or managed mode.

SSM

Single System Management. A method of viewing and managing systems without the use of a central management server (CMS). In the SSM model, administrators log in to the system to be managed and use the management tools directly on that system. This differs from the CMS-based management model, in which administrators log in to the CMS and use management tools on the CMS that contact the managed systems.

Stable Complex Configuration Data
see SCCD

standalone server

Hardware that can run one or more operating systems but does not support dividing hardware resources into nPartitions.

system
  1. A server, nPartition, virtual partition, or virtual machine that is running an instance of an operating system.

  2. Entities on the network that communicate through TCP/IP or IPX. To manage a system, some type of management protocol (for example, SNMP, DMI, or WBEM) must be present on the system. Examples of systems include servers, workstations, desktops, portables, routers, switches, hubs, and gateways.

see also server

system bus adapter
see SBA

system firmware

Code that provides a uniform, architected context in which to perform processor-dependent operations. Also called processor-dependent code (PDC) on PA-RISC systems. On Itanium®-based systems, system firmware includes PAL (Processor Abstraction Layer), SAL (System Abstraction Layer), EFI (extensible firmware interface), and ACPI (Advanced Configuration and Power Interface).

Systems Insight Manager
see SIM

target

The value that drives a policy, thereby influencing its resource requests to gWLM.

For a target processor utilization, gWLM attempts to keep a workload's processor utilization below the target by adding processor resources when the workload is using too much of its current processor allocation. For example, assume a workload has a utilization policy with a target of 80% and a size of 5 processors. If the workload is consuming 4.5 processors, its utilization percentage is 4.5/5, or 90%. The gWLM software attempts to allocate additional processor resources to the workload to meet the target. A size of 6 processors results in a utilization percentage of 4.5/6, or 75%, thus meeting the target.

A target can also be a value the workload should stay above, such as x transactions per second. In this case, adding resources helps the workload maintain the number of transactions.
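
The processor-utilization arithmetic in the example above can be restated in a few lines of Python (a sketch of the example only, not gWLM's internal calculation):

    consumption = 4.5   # processors consumed by the workload
    target = 0.80       # utilization target from the policy

    for size in (5, 6):
        utilization = consumption / size
        print(f"size={size}: utilization={utilization:.0%}, "
              f"meets target: {utilization <= target}")
    # size=5: utilization=90%, meets target: False
    # size=6: utilization=75%, meets target: True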

Temporary Instant Capacity
see TiCAP

TiCAP

Temporary Instant Capacity. An HP product that enables customers to purchase prepaid processor activation rights, for a specified (temporary) period of time. Temporary capacity is sold in 30-processor-day increments. TiCAP was formerly referred to as “TiCOD”.

TiCOD
see TiCAP

TOC

Transfer of control. A soft reset, which terminates the operating system and all applications, and causes a crash dump to be saved to the dump device, if one is defined.

see also hard reset

transfer of control
see TOC

unassign a cell

Modify the Stable Complex Configuration Data so that a cell is no longer assigned to an nPartition and is instead a free cell. A cell must be inactive before it can be unassigned. If the cell was not inactive before the unassignment operation, then the operation will not be complete until the nPartition has performed a reboot for reconfiguration.

unassigned cell
see free cell

unbound processor

In A.03.x versions of vPars, an unbound processor is a processor that can be migrated between virtual partitions while those partitions are running. Unbound processors cannot handle I/O interrupts. Unbound processors are sometimes referred to as “floater processors”.

The distinction between bound and unbound processors does not apply to vPars version A.04.x.

see also bound processor

undeploy

Change the shared resource domain (SRD) state to disable gWLM's management of system resources in a specified SRD.

If an SRD is in managed mode, undeploying stops the migration of system resources between compartments in the SRD. If the SRD is in advisory mode, undeploying stops gWLM from providing information about the requests that would have been made.

see also deploy

usage database

The HP repository that contains Pay per use system-utilization information. You can access this information through the Utility Pricing Solutions Portal.

use-on-next-boot

A per-cell flag in the Partition Configuration Data. This flag is used by system firmware during the process of booting an nPartition. If a cell is assigned to an nPartition and this flag is not set, then the cell is not activated the next time that the nPartition is booted.

utilities subsystem

The utilities subsystem provides the platform management infrastructure for a complex. Its features and services are accessible through the Service Processor user interface, Partition Manager, and other platform management tools. It includes the following components:

utility meter

The software and hardware device that receives Pay per use system-utilization information from the Pay per use software. The utility meter is initially installed and configured by an HP service representative.

Utility Pricing Solutions Portal

An HP Web site that gives customers an interface to view their Pay per use system-utilization information and to obtain codewords for Instant Capacity (iCAP) systems.

utilization policy

A policy for managing a workload's compartment. This type of policy has a target based on utilization. With a processor utilization policy, gWLM attempts to keep a workload's processor utilization below the target by adding processor resources when the workload is using too much of its current processor allocation. For example, assume a workload has a utilization policy with a target of 80% and an allocation of 5 processors. If the workload is consuming 4.5 processors, its utilization percentage is 4.5/5, or 90%. The gWLM software attempts to allocate additional processor resources to the workload to meet the target. An allocation of 6 processors would result in a utilization percentage of 4.5/6, or 75%, thus meeting the target.

You can set a priority for utilization policies to ensure that gWLM attempts to satisfy the policies in a particular order. The highest priority is 1; lower priorities are 2, 3, and so on, through 1000. You can also set a weight for a utilization policy.

Utilization Provider

The WBEM services provider for real-time utilization data from managed systems.

VFP

Virtual Front Panel. An interface provided by the Service Processor that displays the boot/run state of nPartitions.

virtual console
  1. A vPars feature that allows a single hardware console port to be used as the console for multiple virtual partitions.

  2. The virtualized console of a virtual machine that emulates the functionality of the Management Processor interface for HP Integrity servers. Each virtual machine has its own virtual console, from which the virtual machine can be powered on or off and booted or shut down, and from which the guest operating system can be selected.

virtual device

An emulation of a physical device. This emulation, used as a device by a virtual machine, effectively maps a virtual device to an entity (for example, backing store) on the VM Host.

Virtual Front Panel
see VFP

virtual machine

A software entity provided by HP Integrity Virtual Machines. This technology allows a single server or nPartition to act as a VM Host for multiple individual virtual machines, each running its own instance of an operating system (referred to as a guest OS). Virtual machines are servers in the Virtual Server Environment (VSE).

virtual machine application (VM_app)

The executable program on the VM Host that manifests the individual virtual machine. It communicates with the loadable drivers based on information in the guest-specific configuration file, and it instantiates the virtual machine.

virtual machine console
see virtual console

virtual machine host
see VM Host

virtual partition

A software partition of a server, or of a single nPartition, where each virtual partition can run its own instance of an operating system. A virtual partition cannot span an nPartition boundary.

see also nPartition

see also virtual machine

virtual partition scan

A scan of the system to determine the allocation and status of processor, memory, and I/O resources in a vPars-enabled system.

virtual partition server

A software layer, analogous to but not itself an operating system, that supports virtual partitions.

Virtual Server Environment
see VSE

virtual switch
see vswitch

Virtualization Manager

HP Integrity Essentials Virtualization Manager. Virtualization Manager provides hierarchical visualization of servers and workloads, with seamless access to the management tools of the VSE technologies.

VM
see virtual machine

VM Host

An HP Integrity server running HP-UX with the HP Integrity Virtual Machines software installed. Virtual machines are manifested as processes executing on the VM Host. Configuration, management, and monitoring of virtual machines is performed on the VM Host.

VM Manager

HP Integrity Virtual Machines Manager. The VSE Management Software application that is responsible for managing and configuring HP Integrity Virtual Machines.

vPars

An HP software product that provides virtual partitions.

see also virtual machine

vPars monitor

The program that manages the assignment of resources to virtual partitions in a vPars-enabled system. To enable virtual partitions, the vPars monitor must be booted in place of a normal HP-UX kernel. Each virtual partition running under the monitor then boots its own HP-UX kernel.

The vPars monitor reads and updates the vPars partition database, boots virtual partitions and their kernels, and emulates certain firmware calls.

see also VM Host

vPars partition database

The database that contains the configuration information for all the virtual partitions on a vPars-enabled system.

VSE

The HP Virtual Server Environment (VSE) is an integrated virtualization offering for HP-UX servers, providing a flexible computing environment that maximizes utilization of server resources. VSE consists of a pool of dynamically sizable virtual servers, each of which can grow and shrink based on service-level objectives and business priorities.

vswitch

Virtual switch. Refers to both a dynamically loadable kernel module (DLKM) and a user-mode component implementing a virtual network switch. The virtualized network interface cards (NICs) for guest machines are attached to the virtual switches.

way

An older term that describes the number of processors in a symmetric multiprocessing (SMP) system (for example, “4-way”). This term has been replaced by “processor” (for example, “4-processor”).

WBEM

Web-Based Enterprise Management. A set of Web-based information services standards developed by the Distributed Management Task Force, Inc. A WBEM provider offers access to a resource. WBEM clients send requests to providers to get information about and access to the registered resources.

see also nPartition Provider

see also Utilization Provider

Web-Based Enterprise Management
see WBEM

weight

A value that you assign to a policy to determine how system resources are allocated by gWLM in the following scenarios:

  • Global Workload Manager addresses priority levels from highest to lowest, allocating system resources to all requests at a given priority level before considering lower-priority requests. If requests cannot be satisfied at some priority level, the remaining resources are distributed so that the total resource allocation for each workload is as close as possible to the proportion of its weight relative to the sum of all the weights.

  • If gWLM has satisfied all system resource requests at all priorities and there are resources still to be allocated, gWLM will distribute the remaining resources by weight.
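
The second case can be sketched directly: leftover resources are split in proportion to the policy weights. The following Python fragment is illustrative only (the workload names are hypothetical):

    def distribute_excess(excess, weights):
        # Each workload's share of the excess is proportional to its
        # weight relative to the sum of all the weights.
        total = sum(weights.values())
        return {name: excess * w / total for name, w in weights.items()}

    print(distribute_excess(4, {"sales": 3, "batch": 1}))  # -> {'sales': 3.0, 'batch': 1.0}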

what-if scenario

A configuration of systems and workloads that is different from the current configuration. Capacity-planning simulations are run using what-if scenarios as experiments before making an actual configuration change.

wizard

A sequential series of pages that transforms a complex task into simple steps and guides you through them. The wizard makes sure that you provide all of the required information and do not skip any steps. At each step, a page is presented that allows you to specify the information needed to complete that step. Help is available at each step, and you always have the option of going back to continue the wizard from a previous step.

workload

The collection of processes in a standalone server, nPartition compartment, virtual partition compartment, or virtual machine compartment. Global Workload Manager (gWLM) extends this concept to include processor set (PSET) compartments and FSS group compartments. Global Workload Manager enables you to monitor and manage workloads by automatically adjusting the resource allocations of their compartments based on policies.

see also managed workload

see also monitored workload

XBC

Cross-bar chip. On some server models each cell in a compute cabinet plugs into a cross-bar backplane by means of a pair of connectors, thereby forming a connection between the cell controller on the cell board and a cross-bar chip. On other server models, cell controllers are directly connected to other cell controllers, thereby eliminating the need for a cross-bar backplane.