Friday, November 23, 2007

NAS and SAN storage operating system

Openfiler

Description

Openfiler is a powerful, intuitive browser-based network storage software distribution. Openfiler delivers file-based Network Attached Storage and block-based Storage Area Networking in a single framework.

Openfiler sits atop CentOS Linux (which is derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor). It is distributed as a stand-alone Linux distribution. The entire software stack interfaces with third-party software that is all open source.

File-based networking protocols supported by Openfiler include NFS, SMB/CIFS, HTTP/WebDAV and FTP. Network directory services supported by Openfiler include NIS, LDAP (with support for SMB/CIFS encrypted passwords), Active Directory (in native and mixed modes) and Hesiod. Authentication protocols include Kerberos 5.

Openfiler includes support for volume-based partitioning, iSCSI (target and initiator), scheduled snapshots, resource quotas, and a single unified interface for share management, which makes allocating shares for various network file-system protocols a breeze.

Features currently available in Openfiler:

Powerful block storage virtualization

Full iSCSI target support, with support for virtual iSCSI targets for optimal division of storage

Extensive volume and physical storage management support

Support for large block devices

Full software RAID management support

Support for multiple volume groups for optimal storage allocation

Online volume size and overlying filesystem expansion

Point-in-time snapshots support with scheduling

Volume usage reporting

Synchronous / asynchronous volume migration & replication (manual setup necessary currently)

iSCSI initiator (manual setup necessary currently)

Extensive share management features

Support for multiple shares per volume

Multi-level share directory tree

Multi-group based access control on a per-share basis

Multi-host/network based access control on a per-share basis

Per-share service activation (NFS, SMB/CIFS, HTTP/WebDAV, FTP with read/write controls)

Support for auto-created SMB home directories

Support for SMB/CIFS "shadow copy" feature for snapshot volumes

Support for public/guest shares
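The multi-host/network access control listed above can be illustrated with a short sketch. This is a hypothetical illustration of the idea (invented share names and networks), not Openfiler's actual code; Openfiler configures this through its web interface:

```python
import ipaddress

# Hypothetical access table: share name -> list of allowed networks (CIDR).
SHARE_ACL = {
    "projects": ["192.168.10.0/24", "10.0.0.5/32"],
    "public":   ["0.0.0.0/0"],  # guest share, open to any host
}

def host_allowed(share, host_ip):
    """Return True if host_ip falls inside any network allowed for the share."""
    addr = ipaddress.ip_address(host_ip)
    return any(addr in ipaddress.ip_network(net) for net in SHARE_ACL.get(share, []))
```

A request from 192.168.10.7 would be admitted to "projects", while a host outside the listed networks would be refused; a share absent from the table denies everything.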

Accounts management

Authentication using Pluggable Authentication Modules, configured from the web-interface

NIS, LDAP, Hesiod, Active Directory (native and mixed modes), NT4 domain controller

Guest/public account support

Quota / resource allocation

Per-volume group-quota management for space and files

Per-volume user-quota management for space and files

Per-volume guest-quota management for space and files

User and group templates support for quota allocation
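The per-volume quotas above track two limits at once: space and file count. A minimal sketch of that dual check (hypothetical class, not Openfiler's implementation, which enforces quotas at the filesystem level):

```python
# Hypothetical per-volume quota tracker mirroring the space/files limits above.
class VolumeQuota:
    def __init__(self, space_limit_mb, file_limit):
        self.space_limit_mb = space_limit_mb
        self.file_limit = file_limit
        self.used_mb = 0
        self.files = 0

    def can_write(self, size_mb, new_files=1):
        """True only if the write stays within BOTH the space and file quotas."""
        return (self.used_mb + size_mb <= self.space_limit_mb
                and self.files + new_files <= self.file_limit)

    def write(self, size_mb, new_files=1):
        if not self.can_write(size_mb, new_files):
            raise PermissionError("quota exceeded")
        self.used_mb += size_mb
        self.files += new_files
```

Note that either limit alone can block a write: many tiny files exhaust the file quota long before the space quota, and one large file can do the reverse.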

Other features:

UPS management support

Built-in SSH client Java applet

Full industry-standard protocol suite

CIFS/SMB support for Microsoft Windows-based clients

NFSv3 support for all UNIX clients with support for ACL protocol extensions

NFSv4 support (testing)

FTP support

WebDAV and HTTP 1.1 support

Linux distribution back-end for any other customizations

Open source gives you the power to modify and deploy the software as you see fit

Virtualization Solutions from Red Hat


Freedom from upgrades

A common annoyance for systems administrators is the major testing and qualification work required when a new version of a base operating system is introduced. The benefits of the new software may be very minor for the majority of applications, yet the upgrade may be required to accommodate one application or one new hardware option. Virtualization provides an escape from this very problem. The existing stack can continue to run as-is as a guest inside a virtual machine, while the latest hypervisor happily supports the new hardware and brings benefits in reliability and performance. The few applications that need the new version of the operating system can execute on another virtual machine running the latest version. This means that upgrades need be performed only when system administrators want them; they will never again be forced into an unplanned and costly software upgrade.

Security

While a system that is used for only one application can be locked down tightly, many systems today have shared access, and it is important to ensure that privacy is maintained. Virtualization allows each application and data set to be placed in a separate virtual machine. This has many of the advantages of locking down each physical system, without the proliferation of hardware. Because virtualization isolates guests from one another, each guest is much less susceptible to undesired sharing, and any successful attack is limited to the one guest that is penetrated. Coupled with SELinux and Red Hat Identity Management, it is possible to achieve a high degree of user and data isolation without requiring a separate server for each user.

Development and testing

Software development requires a long cycle with many iterations of coding, debugging, and testing. In the past, debugging and testing have often required many separate systems, and it has been difficult to build up the larger networks and datasets needed for testing. Virtualization provides a number of solutions. Developers can be given individual virtual machines that they can start and stop without impacting each other, so developers no longer need individual physical machines. This allows for much better debugging of code, including kernel code.

Because virtual machines can be easily and quickly started, stopped, and modified, it is possible to automate a large series of regression tests. Scripts can provision different versions of applications and operating systems, run known datasets against them, and plot and report the results. If a system dies, the script can detect this, as only the guest will have crashed, not the base Dom0. When large collections of systems need to be used, multiple networked guests can be brought up to simulate a large physical network with a small number of physical servers. This allows scalability testing that is rarely done today. In fact, during off hours, the spare cycles of unused production machines can also be used for testing in a safe manner because of the security and firewalling of each guest.
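The scripted regression testing described above can be sketched as a small harness. The provisioning calls here are deliberate stubs (a real setup would invoke the hypervisor's management tools, e.g. Xen's "xm create" / "xm destroy"); the point is the loop structure, in which a crash is contained to one guest:

```python
# Illustrative regression harness: provision a guest per OS version, run a
# test against it, tear it down, and collect the results.
def start_guest(name, os_version):
    return {"name": name, "os": os_version}   # stub: would boot a VM here

def stop_guest(guest):
    pass                                       # stub: would shut the VM down

def run_regression(os_versions, test):
    results = {}
    for ver in os_versions:
        guest = start_guest(f"test-{ver}", ver)
        try:
            results[ver] = test(guest)         # a crash affects only this guest
        except Exception as exc:
            results[ver] = f"FAILED: {exc}"    # Dom0 and other guests keep running
        finally:
            stop_guest(guest)                  # always reclaim the guest
    return results
```

Because each test runs in its own disposable guest, one failed run is recorded and cleaned up while the rest of the series continues.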

Live migration

Live migration allows para-virtualized guests on Red Hat Enterprise Linux Version 5 to be moved from one physical server to another over the network. When the guest is commanded to move (by a program or by a system administrator using standard Red Hat Enterprise Linux management tools), the hypervisor on the "from" system works with the hypervisor on the "to" system to prepare enough memory to hold the migrating guest. Memory is then copied over the network until only "hot" memory is left; because the guest on the "from" machine is still running and servicing clients, we define "hot" memory as the memory actively in use. The "from" hypervisor then pauses the guest and copies the remaining hot memory, after which the hypervisor on the "to" machine gets the guest running. Since all network and I/O connections are maintained in the copied memory, all of these connections persist, and with only a brief pause of less than 200 ms the job continues servicing customers. Note that the systems must be on the same subnet for network sockets to persist, and must have common storage for open files to persist. A lock manager is not needed, and iSCSI, GNBD, or any SAN will work fine.

Live migration is performed for many reasons. A guest may become so heavily utilized that it is either migrated to a new machine, or other guests on that machine are migrated off to give the busy guest more resources. A system may begin warning of soft errors in memory, over-temperature alerts, or other indications of an imminent failure; guests can be migrated off before it is shut down, freeing the server for maintenance. To prepare for a heavy batch stream, as is common with some ERP or finance runs, guests may be migrated off a server to provide capacity. Likewise, guests may be consolidated onto a few machines so that excess capacity can be taken offline to save power.
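The copy-until-hot scheme described above is the classic iterative pre-copy algorithm. A toy simulation of its shape (a sketch with made-up page counts, not Xen's actual implementation): each round re-copies whatever the still-running guest dirtied, the dirty set shrinks toward the hot working set, and only the final small remainder is transferred while the guest is paused.

```python
# Toy pre-copy live migration: returns (total pages sent over the network,
# pages copied during the final stop-and-copy pause).
def live_migrate(total_pages, hot_pages, dirty_rate=0.5, pause_threshold=100):
    copied = 0
    dirty = total_pages                  # round 1: all guest memory is "dirty"
    while dirty > max(hot_pages, pause_threshold):
        copied += dirty                  # copy the current dirty set, guest still running
        # the running guest keeps dirtying pages, but fewer each round,
        # converging on its hot working set
        dirty = max(hot_pages, int(dirty * dirty_rate))
    copied += dirty                      # stop-and-copy: pause guest, send the rest
    return copied, dirty
```

The pause is short precisely because the final dirty set is bounded by the threshold; the cost is that the total data sent exceeds the guest's memory size, since hot pages are copied more than once.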

Failure isolation

A good reason to create a large number of guests and run only a few functions in each is to minimize the domino effect of a crash. Often, if one job on a large SMP system crashes, it is likely that every job on that system will crash or hang. With Red Hat Virtualization, each function can be placed in its own guest; if it fails, or if it has a security problem, the failure will not propagate and only that one guest will be impacted. Through clever use of migration and HA cluster software it is easy to have backup instances ready to go on other machines in case of a problem with one guest or workload.

About Me


Hi, my name is Jakki, and I have 6 years of expertise in the field of system administration. I created this blog especially for posting virtualization technology articles. Friends, feel free to contact me at jakganesh@yahoo.com with any queries related to virtualization. In the future I will be posting new articles on virtualization.

Virtualization

While server virtualization has begun to prove itself with significant benefits in the server farm, users continue to struggle with issues on the desktop. In many organizations the problem is Vista migration, with the hardware requirements it brings, combined with the management challenges that distributed desktops have plagued IT with for years; these pressures have begun to raise serious questions about desktop strategies for the future. And although upgrading the entire PC base to Vista-capable hardware adds to the investment burden, the desktop virtualization option brings an opportunity to address longstanding desktop management problems.

Terminal services and its extensions have been in production longer than either of the other approaches, offering an early hybrid of client/server computing and mainframe time sharing. This approach, known as server-based computing (SBC), allows multi-user applications to run on a central server, to which users connect from thin and fat clients of many types, most typically a thin client. The approach is most familiar for situations where a user runs the same multi-user application and nothing else. SBC offers the highest ratio of users per server.

With Xen virtualization technology (newly acquired by Citrix), Citrix will be in a good position to offer centralized desktop computing via a stronger integration of SBC and virtual clients, with central management of users connecting to whichever approach is best suited to a particular need. Given where Citrix, Xen and VMware fit today and in the future, they have an opportunity to lead the upcoming space of centralized desktop management technology.

Organizations that have successfully deployed server virtualization have generally done so using either VMware or one of the Xen-based offerings, and more and more users are expanding virtualization out to the desktop. VMware calls its desktop virtualization approach Virtual Desktop Infrastructure (VDI), Citrix dubbed it the Dynamic Data Initiative (DDI), and IBM and others use the term virtual clients. A user connects to the virtual machine via software called a connection broker, using a thin client or a browser. These client programs offer a wide range of functionality and are available from a variety of vendors; with them, end users can connect to the virtual infrastructure and make use of the technology.

The future shows that the coming technology will be virtualization, whether it is VMware, Citrix Xen, Virtual Iron, Microsoft virtualization, or some other X.