
Dell at the 2012 LinuxCon and Linux Plumbers Conference


The following is posted on behalf of Shyam Iyer, an engineer on Dell's Linux Engineering team

Dell was in San Diego with the Linux community to talk, collaborate, and discuss solutions that help Linux customers. I gave a talk at LinuxCon 2012 in San Diego on Thursday, Aug 30, titled "Towards optimizing Network utilization and deployment in Virtualized environments - Shyam Iyer, Dell" (a copy of the presentation can be found here).  The attendees were mostly sysadmins and network planners for their respective companies, and there was great knowledge sharing as attendees discussed their deployments, with almost fifty percent saying they are virtualized. Interestingly, that is close to what Gartner and other research reports suggest. I talked about the need for architectures that efficiently utilize the network.  The key drivers are:

  • Workload driven vs traditional
  • Orchestration
  • Automation
  • Compute
  • Storage
  • Network

These are well described in our Networking point of view and I think they form the core of our involvement in solving customer pain points. With this premise, I described a set of use cases where common Linux network deployments for virtualized servers could be optimized and where performance/management trade-offs could be achieved. Some optimizations integrate Open vSwitch-based SDN methodology with traditional Linux networking.

I also did a developer presentation at the Linux Plumbers Conference 2012 (co-located) titled "Harmonizing Multiqueue, Vmdq, virtio-net, macvtap with open-vswitch" on Friday, Aug 31 to discuss low-level implementation details with Linux network developers.

 Overall the conference had a decent mix of sysadmins, developers and operations attendees. CloudOpen 2012 being co-located added to the festive spirit among conference attendees.


Dell PowerEdge Server Part Replacement just got easier


Replacing failed parts on servers is a mundane activity for an IT admin. Replacing a failed part with a new part is straightforward; however, matching the firmware level and configuration of the failed part with the new one can be tedious and complex when performed manually. We all know manual processes are error prone and can lead to more downtime on the server.

What if you didn’t have to worry about it and the server did it automatically?

The “Part Replacement” feature offered through “iDRAC7 with Lifecycle Controller” makes an IT admin's or tech’s life easy and seamless when replacing parts/devices (RAID controllers, NIC/CNA cards, and power supplies) in the server in the event of a failure.

iDRAC7 with Lifecycle Controller is the second generation of embedded server management that comes with the 12th generation of Dell PowerEdge Servers.

Part replacement is supported for the following Dell-supported devices in the system.

  • Network Interface Cards (NIC/CNAs)
  • Storage Adapters (RAID Controllers)
  • Power Supply Units (PSUs)

So you might ask just how easy it is.  Here's a summary of steps in replacing a part:

1. Enable the feature “Configuration update”

2. Choose the “Firmware update” type.

The above features can be enabled using the following methods:

  • Using the Lifecycle Controller platform (via F10 during POST)
  • Using WSMAN-based methods (Remote Enablement)

3. Replace the part

4. Observe that the firmware and configuration are updated based on the options chosen.

The above-mentioned steps can be scripted using tools already available in the OS, such as winrm, or with client libraries such as those from the open-source OpenWsman project.
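For example, here is a minimal sketch, run from a Windows management station, of turning on the part configuration update attribute remotely with winrm. The iDRAC address and credentials are placeholders, and the exact attribute and value strings should be verified against the whitepaper referenced below:

 winrm invoke SetAttribute ^
   "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_LCService?SystemCreationClassName=DCIM_ComputerSystem,CreationClassName=DCIM_LCService,SystemName=DCIM:ComputerSystem,Name=DCIM:LCService" ^
   @{AttributeName="Part Configuration Update";AttributeValue="Apply always"} ^
   -u:root -p:calvin -r:https://<idrac-ip>/wsman ^
   -SkipCNcheck -SkipCAcheck -encoding:utf-8 -a:basic

A similar SetAttribute call controls the “Part Firmware Update” behavior; the whitepaper walks through the complete sequence, including committing the change.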

This blog is just a summary. If you want to know more, read all about it in the Dell published whitepaper on this feature.

See the full article in this whitepaper:

Lifecycle Controller Part Replacement

More details on the product "Lifecycle controller"  can be found at Dell Tech Center.

Managing Converged Network Adapters on PowerEdge Servers


The expanded feature set and capabilities of Converged Network Adapters (CNAs) have been taking the networking world by storm in recent years. Dell PowerEdge servers are perfectly suited to leverage the full potential of CNAs thanks to the Dell iDRAC7 with Lifecycle Controller, which provides extensive manageability of these adapters both one-to-one and remotely, allowing configuration and maintenance of hundreds of servers in parallel.

We recently published a whitepaper introducing the various capabilities of these CNAs. You will find descriptions of the following primary capabilities:

  1. Traditional Network Interface Controller (NIC) capabilities
  2. Internet Small Computer System Interface over Ethernet (iSCSI) capabilities
  3. Fibre Channel over Ethernet (FCoE) capabilities

You will further be introduced to various concepts that help understand how these capabilities can be configured to best meet the needs of the modern data center:

  1. Personalities
  2. Bandwidth partitioning
  3. Boot versus Offload
  4. I/O Identity 

You will also find common workflow scenarios to obtain inventory as well as to configure the CNAs remotely using the Dell Lifecycle Controller API exposed by the iDRAC7:

  1. Inventory
  2. Partitioning enablement
  3. Personality enablement
  4. Bandwidth allocation
  5. I/O Identity
The concepts discussed in this whitepaper will provide you with a strong foundation to take full advantage of the power and versatility of CNAs. The scenarios will further show you just how easy it is to inventory and mass configure such devices on Dell PowerEdge servers from a remote location of your choice.
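To make the inventory scenario concrete, here is a minimal sketch using winrm from a Windows management station; the iDRAC address and credentials are placeholders, and the full set of classes and workflows is documented in the whitepaper:

 winrm enumerate "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_NICView" ^
   -u:root -p:calvin -r:https://<idrac-ip>/wsman ^
   -SkipCNcheck -SkipCAcheck -encoding:utf-8 -a:basic

This returns one DCIM_NICView instance per NIC/CNA port and partition, including model and firmware details.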

  iDRAC with Lifecycle Controller Technical Learning Series.

CiRBA Predictive Analytics Software for Capacity Control in the Cloud


At VMworld 2012, I spent time with the Dell Cloud Consulting team as part of their newly available services for virtualization and cloud solutions. A significant component of that new service is the CiRBA analytics software, which provides a graphical mapping of an existing or planned virtualized cloud system.

I shot some video of the solution as demonstrated by the CEO, Gerry Smith.

(Please visit the site to view this video)

CiRBA provides additional demonstration videos on their site: http://www.cirba.com/demo/index.html.

So Say SMEs in Virtualization and Cloud: Episode 39 Dell | VMware - Hope, Competition & Winning

The joy of zero-touch, plug and play server deployment with Auto-Discovery


The Auto-Discovery feature of the Integrated Dell Remote Access Controller (iDRAC) enables you to remotely provision your Dell PowerEdge servers out of the box by simply racking them, connecting them, and applying power in your Auto-Discovery-enabled network. Auto-Discovery has been baked into 11th and 12th generation Dell PowerEdge servers since 2009, and configuring your network for Auto-Discovery-enabled servers has never been simpler. First, you need a provisioning server (e.g. Dell Lifecycle Controller Integration (DLCI) or Dell Management Plug-In for VMware vCenter); then follow the instructions in the Auto-Discovery Network Setup whitepaper for setting up your DHCP or DNS server so that a newly powered-on iDRAC will automatically discover your provisioning server; and lastly, order your servers with Auto-Discovery enabled from the factory.

The Auto-Discovery Network Setup whitepaper has been updated and expanded with everything you need to know:

  • Full technical description of the Auto-Discovery feature
  • DHCP and DNS configuration options for both Windows and Linux environments (a hypothetical DNS example follows this list)
  • Details on enhancements to Auto-Discovery since its initial release:
    • Manually setting the provisioning server addresses
    • Method to put a server back in factory default Auto-Discovery state (see the Re-Initiate Auto-Discovery whitepaper).
    • Using custom certificates
    • Monitoring auto-discovery status on the server LCD
    • Requesting iDRAC IP address change notifications
  • List of supported provisioning servers
  • Detailed description of Auto-Discovery security and best practices
  • Troubleshooting and corrective actions with screenshots
  • Advanced Configuration Settings (like static IP environments)
  • Details on SOAP message sent between iDRAC and provisioning server
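To give a flavor of the DNS option, here is a hypothetical BIND zone snippet that points newly powered-on iDRACs at a provisioning server. The service-record name, port, host names, and addresses are all assumptions for illustration; verify the exact values against the whitepaper:

 ; hypothetical Auto-Discovery entries (verify record name and port against the whitepaper)
 _dcimprovsrv._tcp.example.com. 86400 IN SRV 0 0 4433 provisioning.example.com.
 provisioning.example.com.      86400 IN A   192.168.0.10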

See the full article in this whitepaper:
Auto-Discovery Network Setup Specification

More details on the product "Lifecycle controller" can be found at Dell Tech Center.

UEFI booting Ubuntu 12.04 on PowerEdge 12G servers


The version of grub2 (1.99) in Ubuntu 12.04 has a bug that results in grub2 failing to boot on certain systems when they boot in UEFI mode. After investigating and working with the grub maintainers, we isolated it to a behavior of the BIOS in PowerEdge 12G systems, although one of our BIOS engineers also found that the UEFI reference implementation from Intel (TianoCore) exhibits the same behavior, albeit on a smaller scale. Patches now exist for both grub 1.99 and grub 2.00, the latter of which will be available in Ubuntu 12.10. Unfortunately, the grub2 packages in Ubuntu 12.04 have not yet been patched to fix this bug, so installing Ubuntu 12.04 on a Dell system in UEFI mode requires some manual workarounds to install the system and then later install a grub 2.00 backport from Quantal.

We have asked for an updated grub2 to be included in Ubuntu 12.04, including the installer. Until 12.04 is updated, if you need to use Ubuntu 12.04 in UEFI mode on your 12G system and can use BIOS mode during installation, you can follow this summary of how to install from BIOS mode and later switch to UEFI mode.

Please note that this is only meant for advanced users and is not supported by Dell. It is also possible to modify the install media for Ubuntu to use a patched grub2 so that the media boots, but those steps are not included in this blog post. A follow-on post by my colleague Stuart Hayes addresses using modified boot media to install directly from UEFI mode.

To install in BIOS mode and then switch to UEFI mode, you can follow this summary of steps. I would like to reiterate that this procedure is only meant for advanced users.

  • Boot the system into the BIOS setup (F2 at boot) and make sure that your system is set to boot in BIOS mode.
  • Set up the boot disk partitions. You will probably want to switch to a console in the installer and use parted to do this:
    • Have /boot/efi as the first partition (e.g. /dev/sda1) with a size of 200MB, formatted as vfat and flagged as bootable.
    • The rest of the disk should be laid out as usual (e.g. /dev/sda2 as /boot, and / and swap in an LVM volume on /dev/sda3).
  • Proceed with the install but, before rebooting out of the installer, chroot into the installed system.
    • Install the grub 2.00 packages that, at the time of this writing, are available from Ubuntu Core developer Colin Watson's PPA:

 deb http://ppa.launchpad.net/cjwatson/grub/ubuntu precise main 
 deb-src http://ppa.launchpad.net/cjwatson/grub/ubuntu precise main

    • Run grub-mkimage. I used "grub-mkimage -O ${EFI_ARCH}-efi -d . -o grub.efi -p "" part_gpt part_msdos ntfs ntfscomp hfsplus fat ext2 normal chain boot configfile linux multiboot". You should verify that you do not need any other grub2 modules for your system configuration.
    • Run "mkdir /boot/efi/EFI/ubuntu".
    • Copy /usr/lib/grub/x86_64-efi and grub.efi (the latter from grub-mkimage output) to /boot/efi/EFI/ubuntu.
    • Run "update-grub". Copy /boot/grub/grub.cfg to /boot/efi/EFI/ubuntu/.
    • Edit /etc/fstab to have an entry for /boot/efi (see the example entry after this list). You can get the UUID for this partition from "ls -l /dev/disk/by-uuid".
  • Reboot. During POST, press F2 to enter the system setup and switch to UEFI boot mode.
  • Exit the BIOS setup and select F11 during POST to run the UEFI boot manager. In the UEFI boot manager, choose the option to select a boot file. Navigate to your disk and find /boot/efi/EFI/ubuntu. Boot to grub.efi in this directory. If, in your configuration, you are brought to the grub command line, you may need to set the configfile, such as with "set configfile="(hd0,gpt1)/EFI/ubuntu/grub.cfg"" (note the double-quotes after the "=") followed by "boot".
  • Once successfully booted into the install system, uninstall all grub packages, and remove the /boot/efi/EFI directory and its contents. Reinstall the grub 2.00 package from the PPA above.
  • Run grub-install (e.g. "grub-install /dev/sda") and update-grub.
  • You should now be able to boot into your system in UEFI mode.
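For the fstab step above, a typical entry looks like this, with the UUID placeholder replaced by the value reported by "ls -l /dev/disk/by-uuid":

 # /boot/efi on the first partition (vfat), matching Ubuntu's usual defaults
 UUID=XXXX-XXXX  /boot/efi  vfat  umask=0077  0  1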

Follow up: Ubuntu on Dell 12G PowerEdge Servers


My colleague Jordan Hargrave previously blogged about issues with the acpi_pad, sb_edac, i7core_edac, and mei drivers in the kernel used by Ubuntu 12.04. The acpi_pad, sb_edac and i7core_edac issues are now resolved in stable release update (SRU) kernels for Ubuntu 12.04. The mei driver bug will soon be fixed in an SRU kernel for Ubuntu 12.04. More details on where these bugs were fixed are below.

acpi_pad: This bug is fixed in 3.2.0-28.44 (Ubuntu 12.04 SRU kernel) and is tracked in Launchpad bug #1035216. The patch was written by Stuart Hayes of Dell Linux Engineering (https://patchwork.kernel.org/patch/1133941/).

sb_edac and i7core_edac: Fixed in 3.2.0-28.44 (Ubuntu 12.04 SRU kernel) and tracked in Launchpad bug #1007061. The patch was written by Chen Gong of Intel (https://lkml.org/lkml/2012/5/8/62).

mei: This bug is fixed in precise-proposed and will soon be provided in an Ubuntu 12.04 SRU kernel (3.2.0-31.50). It is tracked in Launchpad bug #1041164. The patch was written by Tomas Winkler of Intel (https://patchwork.kernel.org/patch/1278631/).

All these fixes will be available in Ubuntu 12.10 when it is released.


UEFI booting Ubuntu 12.04 on PowerEdge 12G servers, Part 2


Earlier today, Jared blogged about a grub2 bug that can cause Ubuntu 12.04 to fail to boot on certain systems (including some of Dell’s 12G servers).  He talked about how to install Ubuntu with the system in BIOS mode, and then change it to UEFI mode after installing a newer version of grub2.

I’d like to mention another way to get around this bug and install Ubuntu with the system in UEFI mode, as normal, by simply using the older, “legacy” GRUB on a USB key to boot into the Ubuntu installer.  Note that this method, too, is not officially supported.

To install Ubuntu Server 12.04 with this workaround, follow these steps:

  • Download the free, publicly available CentOS 6.3, x86_64, “minimal” .iso image.  (This CD image contains the “legacy” GRUB bootloader that we’ll use to boot Ubuntu.)
  • Acquire a USB flash drive with a FAT filesystem (or containing a partition with a FAT filesystem).  Many USB flash drives come with an already-formatted FAT filesystem; otherwise, use the “parted” or “fdisk” and “mkfs.vfat” tools in Linux to create a partition and format it with a FAT filesystem (a formatting sketch follows these steps).
  • On the USB flash drive, create a directory /EFI/BOOT.
  • Mount the CentOS image (“mount –oloop CentOS-6.3-x86_64-minimal.iso /mnt”, for example), and copy the files below from the CentOS image to the /EFI/BOOT directory on the USB flash drive.
    • EFI/BOOT/BOOTX64.efi
    • EFI/BOOT/BOOTX64.conf
    • EFI/BOOT/splash.xpm.gz
  • Mount the Ubuntu 12.04 Server CD, and copy the files below from the Ubuntu disk to the /EFI/BOOT directory on the USB flash drive:
    • install/vmlinuz
    • install/initrd.gz
  • Download the five new grub2-2.00 packages listed below, and put them on your USB flash drive.  At the time of this writing, these are available from Ubuntu Core developer Colin Watson's PPA (http://ppa.launchpad.net/cjwatson/grub/ubuntu/pool/main/g/grub2/).  You’ll want the latest version for the version of Ubuntu you are using—for example, grub-common_2.00-2ubuntu1~ppa2~precise_amd64.deb.
    • grub-common
    • grub2-common
    • grub-efi
    • grub-efi-amd64
    • grub-efi-amd64-bin
  • Next, edit the file EFI/BOOT/BOOTX64.conf on the USB flash drive.  Remove all of the lines below the word “hiddenmenu”, and add the following three lines to the end of the file:

    title Ubuntu 12.04
        kernel /EFI/BOOT/vmlinuz
        initrd /EFI/BOOT/initrd.gz

  • Put the USB flash drive and the Ubuntu CD into the system on which you want to install Ubuntu, and boot to the USB flash drive (hit “F11” during boot on Dell’s 12G servers to get the UEFI boot menu).  The USB flash drive will boot up, and you can install in UEFI mode as normal from the CD.
  • IMPORTANT:  After Ubuntu has been installed, but before the system is rebooted, follow these steps to install the working version of grub2 on your newly-installed system.  (A good point to do this on Ubuntu Server is when it pops up the “Finish the installation” window and asks if your clock is set to UTC.)
    • Go to virtual terminal 2 (hit alt-F2), and chroot into the installed system (“chroot /target”).
    • Mount your USB flash drive (or its FAT partition).
    • Install the new grub 2.00 packages that you copied to your USB flash drive earlier (“dpkg -i --force-breaks /mnt/grub2.00/grub*deb”, for example).
    • Type “exit”, then switch back to the install screen, with alt-F1.
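For the USB-formatting step earlier, here is a minimal sketch, assuming the flash drive shows up as /dev/sdb (double-check the device name with “fdisk -l” first, as these commands are destructive):

 parted -s /dev/sdb mklabel msdos
 parted -s /dev/sdb mkpart primary fat32 1MiB 100%
 mkfs.vfat /dev/sdb1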

Note that the same instructions should work with Ubuntu Desktop 12.04, except that the files to be copied from the Ubuntu desktop CD are /casper/vmlinuz and /casper/initrd.lz (instead of /install/vmlinuz and /install/initrd.gz), and the three lines added to the BOOTX64.conf file should look like this (the kernel parameters were just copied from the “Install Ubuntu” boot entry in the file boot/grub/grub.cfg on the Ubuntu CD):

title Ubuntu 12.04
    kernel /EFI/BOOT/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper only-ubiquity quiet splash
    initrd /EFI/BOOT/initrd.lz

 

Dell TechCenter Featured Contributor: Christian Hansen


The Dell TechCenter community is nothing without its members.  Without you there would be no TechCenter, because all of the wiki pages, blog posts, videos, and white papers exist for you to learn more about Dell products and connect with Dell engineers like us.

To show our appreciation to you the reader, from time to time we like to highlight members of the community in a feature called Dell TechCenter Featured Contributors.  It's our hope that through these posts, Dell TechCenter members and IT professionals around the world can get to know each other better.  

Today, for his outstanding work in the Dell TechCenter forums, we would like to highlight community member Christian Hansen, also known as DarkingDK.  Recently he helped many other community members with questions about Dell EqualLogic. In addition, he has participated in our weekly Tuesday Tech Chats on Dell TechCenter, including the September 4th chat about Windows Server 2012.  Not long ago, Christian answered some questions about himself for this Featured Contributor post:

 

What is your name?

My name is Christian Hansen, I am 34 years old, and live in Copenhagen, Denmark

 

What is your role and company?

I work as an IT Engineer for a large Danish Nonprofit manager of housing associations; basically we rent out apartments in the greater Copenhagen area. My main areas of expertise are Citrix, Exchange, Storage and servers. That includes planning, buying and maintaining the datacenter.

 

You’ve worked in IT for 15 years - what do you like about the industry and what have you learned during this time?

I like the ever-changing ways IT is implemented; I never get to sit still for too long and just maintain, as there is always a new project on the horizon and new technologies to learn. Providing a company with the right solution fitting their needs has changed a lot since I first started. Out-of-the-box solutions simply do not exist the same way they used to. I’ve learned a great deal over the years, the key being that one-size-fits-all is not really how it works out in real life.

 

What are the biggest IT challenges you encounter on a day to day basis?

"Debugging" complex environments tend to be hard on my limited time , finding performance issues in layered environments like virtualized servers, provide a greater challenge. Another is planning ahead. Estimating what a company wants 3-5 years ahead is hard; doing it on sometimes tight budgets makes it even harder.

 

What content on Dell TechCenter do you find helpful?

I really love both the wikis and the forum; the wikis provide me with what Dell recommends for an installation, and the forums provide me with the caveats that might be out there. I also like the possibility of getting in contact with engineers and product teams in the tech chats.

 

How did you first discover Dell TechCenter and why do you participate in the forums?

I used to lurk around the Dell forums before Dell TechCenter was invented, and decided to join after it had been up for a while. I participate on the forum mostly because I might be able to provide knowledge that can help someone with a quick solution to a sometimes really simple problem. It’s always easier when you know.

 

What areas of Dell TechCenter do you visit the most often?

The wikis and the forums, mainly the EqualLogic-related ones, but the server forum is really helpful too.

 


Thank you all for reading, and thanks for continuing to visit the site and for contributing to Dell TechCenter!

 


Power Protection and Management in the Data Center


This post was written by Joy Ruff

As server virtualization adoption continues to increase in data centers of all sizes, the need to protect those servers from power interruptions becomes paramount. Bringing a clean power stream is just the beginning; just as important, and more complex, is ensuring access to each virtual machine and having the capability to manage them all if power is lost.

You can select a Dell Uninterruptible Power Supply (UPS) to help manage these concerns within your facility. Dell offers both tower and rack-mount models, ranging from 500W to 5600W of IT load.  Learn more about the Dell UPS portfolio by watching this video. The optional Network Management Card (NMC) provides secure web browser access to the UPS for control and monitoring. You can even monitor the temperature, humidity and the status of two dry contacts using the optional Environmental Monitoring Probe (EMP).

The Dell UPS includes two management software programs, which support an unlimited number of servers with no additional fee, to handle virtualized environments:

Dell Multi-UPS Management Console (MUMC)

Designed for multi-host server environments, Dell’s console facilitates easy and versatile monitoring and management across the network from a single interface. MUMC also provides agentless integration into multi-host managers.

Dell UPS Local Node Manager (ULNM)

Created for single-host environments, Dell’s node manager provides graceful automatic shutdown of any UPS, load segment or connected device during prolonged power disruptions, preventing data loss and saving work-in-progress.

With Dell UPS management software:

  • There’s no need to install the UPS software on each VM.
  • Power profiles are automatically synchronized with the virtualized network infrastructure.
  • The Dell UPS agent can gracefully shut down virtualized servers on demand or during outages.
  • During a power failure, VMs can automatically be migrated to a safe host.

Find out more about the Dell UPS management software, and how to handle power for virtualized environments.

If your deployment includes the Dell PowerEdge M1000e chassis filled with blade servers, you may have additional concerns about delivering and managing power to this dense configuration. You can learn about options for Dell UPSs and power distribution units (PDUs) in this white paper.

You can select the right UPS or PDU for any type of IT system configuration by visiting dellups.com and entering your key requirements in the online tools. For more information on Dell power products, go to dell.com/poweredge/rack.

 

Dell PowerEdge C8000 and HPC – Flexibility Rules!


This post was written by Jeff Layton, Ph.D. | Dell HPC Enterprise Technologist

 

The new PowerEdge™ C8000 chassis  (announced today) is the next evolution of the shared infrastructure server from Dell. It builds on the very successful PowerEdge C6100 4-in-2U server to create a more flexible and very dense server platform that works well in HPC.

A shared infrastructure server is like a cross between a blade server and a standalone rack-mount server. As the name implies, the servers share infrastructure, typically power, cooling, and a chassis that houses the servers. But it stops there and doesn’t add the shared networking and management that blades typically have. The goal is to leverage the desirable infrastructure aspects of blades without having to develop networking and management hardware that can increase the cost of the system.

The PowerEdge C8000 is really a family of servers that fit into the C8000 chassis. Figure 1 below shows a C8000 chassis with several different types of servers (also called “sleds” or nodes).

Figure 1 – C8000 chassis with several types of servers

 

The C8000 chassis itself is a 4U-high chassis, 813 mm long, that accommodates up to 10 “sleds.” Typically the two middle sled slots are occupied by two redundant power supplies. In Figure 1 these have white handles. Each power supply sled has two 1400 W power supplies, so up to 5600 W can be delivered in a non-redundant manner (4x 1400 W), or up to 2800 W with full 2+2 redundancy across power supply sleds.

The sleds that fit into the C8000 currently come in two varieties. The first is a single-wide sled, which allows up to 10 sleds to be put into the C8000 (typically two are single-width power supply sleds). The second is a double-wide sled, which allows up to 5 sleds to be put into the C8000 (but again, with the two center sleds occupied by power supplies, this means you can put 4 double-wide sleds into the chassis). The power supply sleds plug into a power backplane that provides power to the rest of the servers (part of the shared infrastructure design). The cooling is the classic front-to-back design, with the fans at the rear of the chassis.

One of the features of the C8000 family is that all cabling except the power cabling is done from the front. No longer will you have to go to the back of the chassis to disconnect a server to replace a component. Now you can do all of that from the front in the cold aisle. That means you no longer need to use Chapstick to prevent your lips from cracking while working on a chassis.

As I mentioned previously, the C8000 is the chassis itself, but there are a family of sled solutions that fit into the chassis. On the far left in Figure 1 is a single-width C8220 compute server which is shown below in Figure 2.

Figure 2 – C8220 single-width server sled

 

It is a two-socket Intel® Xeon® E5-2600 server. It has 16 DIMM slots (8 per socket) for up to 256 GB of memory per server. It also has room for two SATA 2.5” drives in the back of the sled. There is also a x8 PCI Express Gen3 mezzanine slot that can accommodate a variety of cards; for HPC in particular, this can be an FDR InfiniBand (IB) card or a 10GigE or 40GigE NIC. In addition, there is a x16 PCI Express Gen3 slot that can accommodate low-profile, half-height cards such as a Mellanox ConnectX-3 dual-port card or an Intel dual-port 10GigE NIC.

The C8220 also contains an embedded BMC (Baseboard Management Controller) that is IPMI 2.0 compliant. It is also Intel Node Manager 2.0 compliant. There is a single 10/100 Mbps RJ-45 connector on each C8220 sled. 

Remember that with redundant power supply sleds, the C8000 can hold up to 8 C8220 servers.  This gives the same density as the highly regarded Dell PowerEdge C6100 4-in-2U shared infrastructure server. However, the C8000 leverages a larger-scale infrastructure, simplifying the construction of HPC systems because there are fewer chassis and fewer power supplies, which all reduces the number of components.

Also available is the C8220X server that builds on the C8220 server sled by adding two GPUs. In Figure 1, it is the third sled from the left or second from the right. Figure 3 below shows the sled (note that this figure shows the sled upside down so you can better see the sled).

  

Figure 3 – C8220X Compute/GPU sled

 

On the right in Figure 3 is the compute server, which is a standard C8220 sled. On the left in Figure 3 is the GPU portion of the sled, which houses two GPUs. The GPUs are connected to the server via two dedicated x16 PCI Express Gen3 connectors, so each GPU has full bandwidth to the server. The compute sled on the right connects to the GPU portion on the left via special cables, which gives the design more freedom and density.

Currently, the C8220X ships with the NVIDIA® Tesla™ M2090 GPU computing modules. Future GPUs and other accelerators can be accommodated by the C8220X, creating a very flexible accelerated compute server in a very compact form factor.

Recall that with redundant power supplies in the C8000, it can accommodate up to 4 double wide servers such as the C8220X. This means that in 4U of vertical rack space you can have four two-socket servers each with 2 GPUs with a full x16 PCI Express Gen3 connection to the server for a total of 8 GPUs and 8 sockets in 4U of space.

At launch there is also a storage sled called the C8000XD. In Figure 1 the storage sled is on the far right of the chassis with a handle sticking out of the front of the sled. Figure 4 below is a view of the C8000XD. 

Figure 4 – C8000XD storage sled

The sled contains only disks and no compute elements. You can think of it as a JBOD (Just a Bunch of Disks) sled that connects to a compute sled for either direct disk access or RAID. As you can see in Figure 4, the disks mount vertically into the sled, making connections for power and data at the bottom of the sled. The C8000XD can take up to 12x 2.5” or 3.5” SAS/SATA drives or 24x 2.5” SSD drives. All of the drive slots are hot-plug capable.

The launch configuration of the C8000XD can daisy-chain up to four C8000XD sleds giving you up to 48x 2.5” or 3.5” drives that are connected to a single compute sled. All the connections are done through the front of the sled keeping the cold aisle accessibility of the sleds.

 

Summary of the C8000

September 19th, 2012 is just the initial launch of the C8000. But it’s not a “point” product but rather the beginning of a family of evolving products. You will see new features and components being released over time.

At launch there are 4 different types of sleds in the C8000 family: the C8220 single-width compute sled, the C8220X double-wide accelerator sled, the C8000XD storage sled, and the power supply sled. One of the biggest strengths of the C8000 family is flexibility. You can use any of the sleds in a single chassis, within a rack, or within a system. This means you can address a wide variety of workloads or needs within the same system.

For example, if you need a very dense pure compute platform for HPC workloads, then using the C8220 results in a very dense system with up to 80 two-socket sleds in a 42U rack.

If you need servers with large amounts of local storage for workloads such as FEM (Finite Element Method) applications, or even Hadoop, then the C8000XD can be used to attach large drive counts to compute servers.

If you need accelerators along with traditional CPUs, then you can use C8220X sleds. This gives a two socket server along with two GPUs that each has a dedicated x16 PCI Express Gen3 slot.

But the ways that HPC systems are being used are rapidly changing. They will always have the traditional workload of large core-count applications that utilize MPI (Message Passing Interface). These workloads also include large amounts of data, both as input and output. But additional types of workloads are starting to be included in HPC systems, such as bioinformatics, where the files are very small but there are lots of them used across a large number of cores, sometimes with Hadoop-based tools. A second “new” workload that is also appearing on HPC systems results from researchers using tools such as Matlab, R, or Python for a wide variety of tasks, including parameter studies that require thousands of cores.

These new changes to traditional HPC are reflected in a record setting system that uses the C8000 family.

 

Get out of the Way – it’s a Stampede!

The first system to be deployed using the C8000 family is one that you may have heard of - Stampede. The Texas Advanced Computing Center at the University of Texas is deploying Stampede, one of the fastest systems in the world, expected to achieve approximately 10 PFLOPS (PetaFLOPS) when it enters production.

Stampede is almost entirely built using the Dell PowerEdge C8000 family. The primary compute sled is the C8220X, which also contains Intel® Xeon Phi™ coprocessors and uses a Mellanox FDR InfiniBand fabric capable of 56 Gbps. Each compute sled has 32 GB of memory and two eight-core Intel Xeon processors. There are also two power sleds for redundant power within each 4U chassis. In total there is 272 TB (terabytes) of memory in Stampede.

Stampede also uses storage sleds for its primary storage, with Lustre as the file system. Altogether there is 14 PB of spinning disk storage.

There are also 128 NVIDIA® Tesla™ K20 GPUs for remote visualization, and 16 Dell servers, each with 1 TB of memory and two GPUs, for large data analysis.

Figure 5 below is an image from Stampede.

 

 

Figure 5 – View down one row of Stampede

Altogether there are approximately 200 racks in Stampede. The non-accelerated portion of the system is capable of 2 PFLOPS, which is double the performance of the largest system in the NSF XSEDE program (National Science Foundation Extreme Science and Engineering Discovery Environment). When the accelerators are included, Stampede should finish at around 10 PFLOPS.

Power and Performance - the Dell SAP HANA Appliance

$
0
0

This post was written by Nicholas Chavez (Business Intelligence & Analytic Solutions SAP HANA Consulting Lead), Mike Lampa (Director, Business Intelligence & Analytic Solutions) and Kay Somers (Dell Global Alliances)

 

In 1930 eccentric millionaire Howard Hughes released a World War I fighter-plane action film called "Hell's Angels".  His film ran over budget at $3.8 million, earning it the dubious title of "the most expensive film ever".

70 pilots were retained to fly in the film (67 survived) and 249 feet of film was shot for every foot of film that made it into the final cut.  Why waste so much film?  Well, when watching the daily footage Mr. Hughes couldn't tell how fast the planes were flying.  He spent months waiting for clouds to appear near the set so he could film the planes flying with the clouds as a backdrop.

Mr. Hughes needed the clouds to contextualize the speed of the planes.  It worked.  It gave the audience a relative measure of speed and the film went on to be nominated for an Oscar.

Why all this talk about old movies, clouds and planes?  Because like the planes, SAP HANA is FAST.  Before we get into the nuts and bolts, here are the near-mythological speeds reported by SAP:

  • 2 million records browsed per millisecond per core
  • 10 million complex aggregations calculated per core

Those numbers likely don’t mean much without context, so here are some that will help you "get" just how fast HANA really is. These are our "contextual clouds" a la Mr. Hughes:

  • For many years, a server's speed was measured by the clock speed of a single CPU
  • Later, hardware companies deployed servers with multiple CPUs
  • Technology improved and multiple cores could be placed on a single CPU
  • Later, hardware companies deployed servers with multiple cores on multiple CPUs

SAP HANA takes advantage of this hardware processing power by pairing it with SAP HANA database compression software, which resulted in SAP reporting the following ultimate SAP HANA configuration:

  • 10 cores per CPU
  • 8 CPUs per board
  • 4 boards/blades per server
  • 4 terabytes of RAM

Multiply this out and you get 320 cores processing 2 million records per millisecond per core which turns out to be a staggering six-hundred-and-forty-billion (640,000,000,000) records-per-second speed limit.
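Spelling out that arithmetic:

 10 cores/CPU × 8 CPUs/board × 4 boards = 320 cores
 320 cores × 2,000,000 records/ms × 1,000 ms/s = 640,000,000,000 records/s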

Now obviously, there is more to SAP HANA than just a whole lot of horsepower, after all it is a business analytics tool.  So other than the macro-description of the hardware and software working together for blazing speed, just what makes SAP HANA so amazing?

The Power of In Memory Analytics

SAP HANA delivers power and performance using In Memory Analytics. This technology parses and rationalizes large sets of data and delivers intelligent reports to drive business decision-making.

In-memory analytics solutions query and report data much like standard Relational Database Management Systems (RDBMS). RDBMS solutions typically query data stored on disks, which often requires “spin-up”, “seek” and other physical hardware articulation time. In contrast, in-memory analytics solutions minimize the need for physical HDDs in favor of storing the vast majority of relevant data to be queried in the Random Access Memory (RAM) chips within the server.  The speed advantages offered by this RAM storage are further accelerated by the use of multi-core CPUs, multiple CPUs per board, and multiple boards per server appliance.

Dell in-memory analytics software implementations have shown business analytics reporting performance increases that cut the processing time of critical reports from days down to several minutes.

Blog Series Preview

In this SAP HANA blog series, we’re going to break down the power of SAP HANA into five key components:

  1. Architecture – Building a framework for storing, parsing and accelerating reporting and analytics.
  2. Data Provisioning – Identifying, transferring and storing source data for modeling and reporting.
  3. Modeling – Manipulating data for optimal analytics and performance.
  4. Reporting – Extracting data and translating it into meaningful information to drive business decision-making.
  5. User Management – Managing access and activities to protect the sanctity and privacy of data.

Over the next few weeks, we will be exploring how each of these areas contributes to the speedy, robust and powerful SAP HANA in memory analytics solution.

Learn more about Dell and SAP HANA solutions at www.dell.com/hana.

 

 

Waratek Founder and CTO John Matthew Holt on Their Java Cloud Solution


Waratek the Cloud VM for Java

There are potentially serious incompatibilities and costs around running Java in the Cloud that have hampered organizations from re-engineering their businesses to take full advantage of new IT delivery models. Having standardized on Java, they face unexpected challenges as they look to the future. The Waratek Cloud VM for Java is a breakthrough product that plugs this strategy gap and for the first time gives Java the benefits of Cloud delivery, with true multitenant operation, granular elasticity, resource prioritization, and increased density.  Because it is binary compatible, these benefits are available with no coding required.

The much-hyped promise of turning IT into a service, delivered back to the business in a way that can be scaled and metered, becomes a reality.  Whether it’s deployed within the Enterprise or as a Service - Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) or Software-as-Service (SaaS), the Waratek Cloud VM for Java is the key to unlocking on-demand features that are transforming the way IT is consumed.

(Please visit the site to view this video)

Time      Question
0:00      Introduction
0:46      History and Background on Waratek
1:39      How Old is the Company?
2:27      What are the Problems Waratek is Solving for Java?
5:22      What is the Name of the Solution?
6:00      What are the Benefits of this Solution?
7:40      Where Can I Get More Information on Waratek?
8:39      Any Upcoming Events?
9:26      Visiting Dublin – Stop by for a Guinness

BIO: John Matthew Holt, Founder and CTO


John Matthew Holt is the inventive inspiration and technical driving force behind Waratek’s groundbreaking research and development into distributed computing and virtualization technologies, which has led to the granting of over 50 patents to date with many more pending.
 
As CTO, John Matthew leads a multinational team of expert computer engineers on a journey that has resulted in the creation of Waratek, the Cloud VM for Java. 

John Matthew has a long-standing passion for exploring contrarian frontiers of metacircular abstract machine interpreters and dynamic recompilation frameworks. John Matthew leads the design and development of Waratek's virtualization products and technologies.

iDRAC Lifecycle Logs – PowerEdge servers lifecycle at your fingertips


This post is co-written by Satyajit Desai and Raja Tamilarasan

Are you an IT admin? Do you manage your organization’s data center? If yes, continue reading for some great news that’s going to save you time. As a system admin, you must have come across a situation where you have a machine that’s down and a teammate who says “I don’t know what happened but the machine doesn’t work any longer”. You would then start playing twenty questions to figure out what happened to the system. Instead, what if the machine could tell you exactly what happened before the failure?

Introduction to Lifecycle Logs
Starting with the 11th generation of Dell PowerEdge servers, the iDRAC with Lifecycle Controller provides a feature called Lifecycle Logs which records every action performed on the server. This information can be viewed locally or remotely through various GUI based consoles as well as iDRAC supported command line interfaces.
To access the Lifecycle Logs on a 12th generation PowerEdge server, browse to the iDRAC web GUI and login. In the Tree view, click on Server, select the Logs tab then click on Lifecycle Log. From here, you can immediately see the most recent server events. If you like, you can find particular entries by using the Log Filter feature.
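If you prefer a command line, the same log can be pulled remotely with racadm; here is a minimal sketch (the IP and credentials are placeholders, and available options vary by firmware version, so check "racadm help lclog"):

 racadm -r <idrac-ip> -u root -p calvin lclog view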

Below are snapshots of the Lifecycle Log entries:

 

 

The various types of information available in the Lifecycle Log are listed below.
1. Information about replaced parts.
2. Information about firmware changes due to an upgrade or downgrade.
3. Information about failed parts.
4. Temperature warnings.
5. Configuration changes on the system hardware components.

As seen in the snapshots above, the Lifecycle Logs also provide detailed timestamps, severity, recommended actions, and other technical information that can come in very handy for tracking or alerting purposes. To learn more about the different types of messages that are logged as part of Lifecycle Logs, refer here.

Dell is listening to the customers and making improvements to its iDRAC and Lifecycle Controller embedded server management portfolio. To learn about some of the other cool features supported by Dell’s iDRAC with Lifecycle Controller solution, refer here.


 


So Say SMEs in Virtualization and Cloud: Episode 40 Dell | VMware - Infrastructure & Ecosystems

Top 10 Reasons to Attend Dell Storage Forum Paris


Hello there blogoverse! It's time for the Top Ten Reasons to Attend Dell Storage Forum Paris blog post! But wait, where have I been all this time? Well, I've been working on all new collateral, tech reports, videos and more goodies for our next version of VMware integration tools! So keep your eyes peeled because we are almost there!

In fact, one of the cool things about attending any of the Dell Storage Forums, whether in the US or overseas, is to see all the cool new integration packages before they are released! There was even a Dell Storage Forum in Australia! I need to see who I need to bribe to get invited to that one next time yeesh!

Ok so enough exclamation marks let's get to talking about why Dell Storage Forum Paris this year is going to be THE storage event to attend overseas.

When is it? Dell Storage Forum Paris is Nov 14-16 in the Marriott Rive Gauche in Paris France. You can find out more information here: http://www.dellstorageforum.com/

What is it? Dell Storage Forum, or DSF for the cool kids, is a conference for channel partners and customers of Dell Storage that comprises deep-dive technical discussions on best practices, what’s new, what's coming up, and basically how to better utilize your Dell Storage. Everything from networking topics to software integration to virtualization and more!

Who's going to be there?  Well me for one, and what other reason do you need? Ok seriously though, this isn't a fluff and sales conference, in fact we don't let those guys in! This conference is designed and led by technical experts from engineering and solutions to bring you deep dive topics on how to better take advantage of your current infrastructure and how to grow and expand your current capabilities.

So now, how do you get your boss to let you go? Well we have a cool letter to your boss located on the website and now I present to you the Top Ten Reasons to Attend Dell Storage Forum Paris Nov 14-16 2012!

#10 - You heard about how great DSF Boston was but weren't able to make it. Well here is your chance! We will be bringing some of the same material as well as new and updated material from the hugely successful DSF Boston to DSF Paris.

#9 - It’s in Paris!  I mean come on, how cool is that.  I’ve never been so I’m wicked excited to go.  Plus the celebration event on Thursday is a boat ride and reception on the Seine river, and you can even see the Eiffel tower all lit up and sparkly.

#8 - Instructor led training.  You can attend optional early instructor led training on configuration and management of EqualLogic or Compellent storage systems.  You’ve been saying for a while you want more training, so here it is!

#7 - The solutions expo.  Come meet all of our amazing partners and the rest of the Dell Enterprise team in the solutions expo and see how Dell and our partners can help you deliver the tools for whatever challenge you might have.

#6 - Talk to peers and colleagues and trade best practices and tips and tricks.  See what challenges they face and how they have overcome them and implement techniques that you hear about in your own environment.

#5 - The NDA sessions are not to be missed.  This is where we talk about the future of storage technology.  Some really cool stuff and I can’t wait to talk about *BLEEP* or how we are integrating *BLEEP* and some really cool demos and discussions.  Learn about the future of our storage lines and how they will fit into your datacenter.

***Cool picture deleted, come to the NDA sessions!***

#4 - Hands on Lab!!  Not only do you have the opportunity to learn all this great information at DSF, but you can actually put these skills to use and learn even more at our hands-on lab.  In fact, in Boston we had over 150 hours of EqualLogic hands-on labs completed!  That doesn’t even count Compellent, DX/DR or PowerVault.  They are a great way to learn in a lab environment so that you can take these skills back with you.

#3 - Come attend some very technical sessions and panels presented by engineers and subject matter experts.  You’ll learn more about your storage in these sessions than you will all year!  From best practices and networking, to implementation and data protection there are some really great sessions.  Check out the agenda on the DSF webpage and you will see all the great content.  Kind of like “Local and remote protection and recovery for VMware virtual environments using Dell EqualLogic Auto-Snapshot Manager/VMware Edition” and “Essentials for deploying, integrating and scaling VMware vSphere with Dell EqualLogic”.  *hint hint shameless plug for my sessions*

#2 - Come have deep dive technical discussions with engineers and experts.  Come whiteboard your “what if I do this” or “how can I achieve this” and let us guide you on leveraging your equipment to its fullest.

#1 - And the number one reason to attend Dell Storage Forum Paris is the keynotes!!  We have got some amazing things to talk about, and you can learn all about the strategic direction of Dell storage from the Dell executive team.

So there we have it, the top ten reasons to come to Dell Storage Forum Paris!  Although I could come up with another 20 or so.

Can’t wait to see you so until then keep it virtual!

Au revoir - see, look at me learning French :)  -Will

As always: I work for Dell as a Product Solutions Engineer in the Nashua Design Center working on VMware integration with the EqualLogic PS Series SAN products. The views and discussions on this blog are my own and do not necessarily represent my employer. The idea of this blog is to educate and entertain, and hopefully it will accomplish that.

You can follow me on twitter at @VirtWillU.

Also, check out Dell TechCenter for these videos and other content on EqualLogic

Simplified Remote Server Configuration through iDRAC with Lifecycle Controller


This post was co-written by Satyajit Desai and Raja Tamilarasan

Configuring and monitoring servers is a large part of a data center admin’s job, and is fairly straightforward when the server(s) in question have an operating system installed. But have you ever wondered how you would manage a bare-metal server that was freshly racked in the data center? Onboarding a brand new server involves configuring the BIOS, setting up the RAID and deploying an operating system, among a host of other things. These onboarding tasks can become extremely difficult when the server is in a different geographical location than the admin. Dell’s iDRAC with Lifecycle Controller overcomes this remote management challenge and provides complete end-to-end remote configuration capabilities.

Introduction to Remote Configuration

Wouldn’t it be awesome to be able to pull your server out of the box, make the physical connections and from there on, configure it to your desired settings without having to be anywhere near it? Remote Configuration allows you to view and configure all the hardware present on the server, regardless of its location. This includes configuring BIOS, NIC, RAID, Lifecycle Controller and iDRAC settings on a bare metal server.

With the remote configuration capabilities built-in to the Dell PowerEdge server’s iDRAC, it is a simple task to use the industry standard Web Services Management (WSMAN) command line interfaces (CLI) to configure your server remotely.
Not only can you use the WSMAN CLI; Dell software teams have also developed console plugins that integrate with applications like Microsoft System Center Configuration Manager, VMware vSphere, etc. to provide remote configuration capabilities through these consoles.
Some of the Remote Configuration features currently supported by iDRAC and Lifecycle Controller on  11th and 12th generation Dell PowerEdge servers are listed below:

  1. Configuring and monitoring BIOS, including setting the Boot Order.
  2. Configuring and monitoring the RAID controller(s), including creating Virtual Disks, assigning hot spares, etc.
  3. Configuring and monitoring iDRAC settings.
  4. Configuring and monitoring System settings.
  5. Configuring and monitoring Lifecycle Controller settings.
  6. Deploying the Operating System.
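As a minimal sketch of what such a script looks like, the following winrm command lists the current BIOS settings of a remote server; the iDRAC address and credentials are placeholders, and DCIM_BIOSEnumeration is one of the Dell-published profile classes covered in the wiki referenced below:

 winrm enumerate "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_BIOSEnumeration" ^
   -u:root -p:calvin -r:https://<idrac-ip>/wsman ^
   -SkipCNcheck -SkipCAcheck -encoding:utf-8 -a:basic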

For detailed instructions on how to write scripts to make use of the power of the iDRAC and Lifecycle Controller Remote Configuration feature, please refer to the Dell Tech Center wiki page here.

Dell is listening to its customers to make improvements to its iDRAC and Lifecycle Controller embedded server management portfolio. To learn more about some of the other cool features provided by Dell’s iDRAC with Lifecycle Controller, please refer here.

Dell EqualLogic PS-M4110 Blade Array Technical Article and Reference Architecture for Data-Center-In-A-Box


Hello EQL Family,

Dell’s EqualLogic™ product suite continues to drive innovation by delivering all the functionality and enterprise-class features of its traditional rack-based arrays in a new blade chassis form factor product.
The newest member of the Dell EqualLogic PS Series family is the scalable and easy-to-manage Dell EqualLogic PS-M4110 blade array designed for use with the Dell PowerEdge™ M1000e blade chassis product.

The PS-M4110 is targeted at small to medium blade chassis customers demanding shared storage with advanced features at lower cost and with lower space, heating, and electrical requirements than other storage products on the market. The tight integration of the PS-M4110 along with the Dell M-Series chassis and blade servers presents some new and unique use cases for shared storage.

Of particular interest is a use case for a completely converged server, network, and storage infrastructure called data-center-in-a-box. In this deployment model, the M1000e, along with integrated servers, networking, and shared storage, becomes a modular building block for customers wanting to grow their infrastructure – virtual or physical – as they need additional capacity, while minimizing the data center impact in terms of power, cooling, and rack space.

The idea is to converge blade form factor servers, storage and networking hardware inside a single rack mount enclosure or chassis that can be bought, installed and managed as a single system.
This is easier to do than buying, installing and managing the components individually, and makes sense when you're buying lots of the stuff or need a single miniaturised data centre.

The perceived competition is the HP StorageWorks D2200sb storage blade, and Dell has commissioned a comparison report (pdf) from Principled Technologies that shows its PS-M4110 blade:

- has 55 per cent fewer storage configuration steps
- supports 48 per cent more users
- supports 42 per cent more users per watt
- has up to 96 per cent more usable capacity

Additionally, the PS-M4110 received a 9.0 overall Excellent grade and Editor's choice from InfoWorld.

Reference architecture overview

The Storage Interoperability Lab conducted tests on a self-contained, M1000e-based data-center-in-a-box architecture that included testing of multiple different networking components as well as blade servers from the latest two generations of M-Series blade servers.

Two PS-M4110 Storage Blades, running array firmware version 6.0.1, were installed in the M1000e Blade Chassis.

The tests were designed to validate the PS-M4110 Storage Blade's interoperability with NIC/CNAs and IO modules. The PS-M4110 Storage Blade provides features and functionality consistent with the PS4000 family and aligns with the EqualLogic family values, including all-inclusive pricing, ease of setup and management, enterprise-class high performance, and 5-9s reliability.

The PS-M4110 follows the PS4000 support model today, which allows for support of PS6000 features (additional members, volumes, snapshots, replication target volumes, and volume connections per group) when joined to a PS6000 group outside of the blade chassis. This auto-adjustment of array profiles to align with PS6000 family values is transparent to the customer.

The main benefits of the PS-M4110 Blade Array versus competitors' storage blade solutions are its ease of use and management, and the fact that the PS-M4110 provides a consistent storage management utility for remote office and data center storage solutions, enabling high-end features such as snapshots and array-based replication.

The PS-M4110 provides the same level of data and array security provided by rack-based PS products.

Get the full report and reference architecture HERE:

 Until Next Time,

Guy Westbrook

Twitter:    @GuyAtDell
Email:      guy_westbrook@dell.com
Chatter:  @EQL Tech Enablement

Click the link below to access the Storage Infrastructure and Solutions (SIS) Team publications library, which has all of our Best Practice White Papers, Reference Architectures, and other EQL documents:

 http://en.community.dell.com/techcenter/storage/w/wiki/2631.storage-infrastructure-and-solutions-team.aspx

Dell Cloud Management Console: Self-Service Cloud Access Is Here


Dell has released our new Cloud Portal (Cloud Management Console) for Dell Cloud Services customers in the United States. This tool allows IT administrators to establish and manage a public cloud infrastructure running on Dell Cloud Services with the following capabilities:

  • Create a new account for Dell Cloud Services
  • Select from various pre-configured bundle packages/subscriptions
  • Establish user access and roles
  • Create new VM images and templates
  • Monitor system and user data in pre-defined reports
  • Upgrade resource needs
  • Account management services including billing
  • Support services

This overview provides a quick look at the new, intuitive CMC tool:

(Please visit the site to view this video)

To access the CMC tool, go to Dell Cloud Management Console.
