
OSCON 2013 – Recap


I was very pleased to meet with Joseph B. George, Barton George and Rob Hirschfeld during this year’s OSCON. Joseph and I spoke about the announcements Dell made around OpenStack and Hadoop, Barton shared with me the latest news on the Dell Sputnik developer laptop and Rob gave me some valuable insights into an interesting discussion in the OpenStack community: How should the community define what is part of the OpenStack project’s core?

Joseph B. George, Director of Cloud & Big Data Solutions at Dell


The Dell OpenStack-Powered Cloud Solution is now available with new options such as support for OpenStack Grizzly and for Dell Multi-Cloud Manager (formerly Enstratius) as well as extended reference architecture support, including the Dell PowerEdge C8000 shared infrastructure server solution, high density drives, and 10Gb Ethernet connectivity.

The latest update of Crowbar includes open sourced RAID and BIOS configuration capabilities. SUSE has integrated Crowbar functionality into SUSE Cloud 1.0, SUSE’s OpenStack-based private cloud distribution. SUSE Cloud 2.0 is currently in beta and will be available in the fall.

Last but not least, Dell now offers Dell Cloud Transformation Services to help customers assess, build, operate and run OpenStack cloud environments.

On the Hadoop side of things, the Dell Cloudera Hadoop Solution now supports the newest version of Cloudera Enterprise. Updates allow customers to perform real-time SQL interactive queries and Hadoop-based batch processing, which simplifies the process of querying data in Hadoop environments. In addition to that, Dell has tested and certified the Intel Distribution for Apache Hadoop on Dell PowerEdge servers.

Barton George, Director of Developer Programs at Dell


Project Sputnik originally started as an internal project and was productized in November 2012. It’s a client-to-cloud platform for developers with three components: the cloud launcher, the profile tool, and the Ubuntu-based XPS 13 developer edition laptop. The profile tool is designed to provide access to a library of community-created profiles, such as Ruby and JavaScript on GitHub, and to configure and set up development environments and tool chains. The cloud launcher enables developers to push applications directly from their client to the cloud. Dell just announced a three-month free trial on the Joyent public cloud for those who purchase the Dell XPS 13 developer edition laptop.

Rob Hirschfeld, Cloud Architect at Dell


Rob recently published a series of blog posts on a very interesting (and important) discussion within the OpenStack community: What should be the core of the OpenStack project? The conversation is currently moving toward a pragmatic approach: a particular feature should become part of OpenStack’s core only when it passes a set of mandatory tests (in addition to having at least one reference implementation). The OpenStack User Committee is expected to engage in this process by providing guidance on which tests are required.

For further details go to Barton George’s and Joseph B. George’s personal blogs.


New Delta Reports available for new Driver CABs on Dell TechCenter


Authored by Sujeet Kumar and Manoj Kumar

We are pleased to announce the availability of new content about system driver CABs on Dell TechCenter, called a “Delta Report”.

 

Driver CABs are updated according to the release schedule, and customers often ask what has changed in a new driver CAB release compared to the previous one (for example, does the new driver CAB contain all the latest applicable drivers?). The “Delta Report” answers these questions.

A “Delta Report” is a report built by Dell’s engineers and is the basis for identifying the latest applicable driver candidates for a new driver CAB release.

 

A “Delta Report”, as the name suggests, contains the difference between the latest applicable set of drivers available on Support.dell.com and the previous driver CAB, enabling customers to understand the changes in the new driver CAB. A “Delta Report” provides the following information:

  • Contents of the previous driver CAB.
  • Changes since the previous driver CAB.
  • Highlights of incremental changes.
  • Applicable driver releases for the new driver CAB.

 

Note: Whether a driver is applicable for a new driver CAB release is decided by various factors, such as which driver supports more devices and whether a release is INF-installable or Operating System Deployment ready.

 

Release of the “Delta Report” will align with the release schedule, i.e. all new driver CAB releases henceforth will have “Delta Reports” associated with them on the Dell TechCenter site. Customers can navigate to a “Delta Report” through the links associated with a driver pack on the Dell TechCenter Driver CAB landing/index page.

 

Quick links for a few published reports are listed below for reference:

 

 

Understanding a “Delta Report”

 

A small part of a “Delta Report” is shown below (Figure-1). Let’s look at the details.

 

Figure-1

 

Key data items in “Delta Report” are:

1. Dell Version of new/latest driver CAB.

2. Release Date of new/latest driver CAB.

3. Download link for new/latest driver CAB.

4. Link for Driver CABs Homepage.

5. Link to “Understanding Delta Report” page.

6. Title of the Delta Report Content.

7. Details about the previous driver CAB (along with Dell version and release date) and Support.dell.com (along with snapshot date), as used for comparison.

8. Table showing comparison and highlighting the changes.

9. Column shows the driver release from the previous driver CAB.

10. Column shows the applicable drivers on Support.dell.com at the time the report was built.

Other columns are Architecture (Arch), Category, Device Description, Status of Comparison (Status) and Details of Status (Status Description).

 

Let’s look at Status and Status Description for different cases:

  • Status “OK”

“OK” value represents:

a)      A perfect match, i.e. the same driver was part of the previous driver CAB and no update for the device is required.

b)      Any driver that is part of the previous driver pack but not available on Support.dell.com.

 

Examples

Figure-2

 

  • Status “No Match”

“No Match” represents:

a)      Drivers that are new additions to the applicable drivers for the latest/new driver CAB, or that are available on Support.dell.com but were not present in the previous driver pack.

 

Examples

Figure-3

 

  • Status “Update”

“Update” represents:

a)      Drivers that were updated to fix issues or provide additional features; the previous driver pack had an older version of the driver for the device.

 

Examples

Figure-4

 

The Status Description provides a hint about the reason a status was set. All drivers marked as “OK” or “No Match” are part of the latest/new driver CAB. All drivers marked as “Update” replace the older driver in the latest/new driver CAB.

 

Please provide your feedback in the comments section below or continue the conversation in the TechCenter forums.

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #40


"The Elway Gazette"

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well; I hope you enjoy their posts. To all MVPs: if you'd like me to add your blog posts to my weekly compilation, please send me an email (florian_klaffenbach@dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

Featured Posts of the Week!

Dell tool for System Center Configuration Manager SP1 2012 by Marcelo Sinic

Einigen hilfreiche CmdLets zum Umgang mit Storage Spaces in German by Carsten Rachfahl

Deploy Exchange 2013 CU’s by Johan Veldhuis

Virtual Receive Side Scaling (vRSS) In Windows Server 2012 R2 Hyper-V by Didier van Hoye

Azure

Windows Azure SDK 2.1 for .NET released by Toni Pohl

Exchange

Deploy Exchange 2013 CU’s by Johan Veldhuis

Hyper-V

If you have Windows Server 2012, and HyperV, install this now by Nick Whittome

Yes, You Can Run A Hyper-V Cluster On Your JBOD Storage Spaces by Aidan Finn

Always Repair Cluster Name Object (CNO) In Order to Perform Live Migration by Lai Yoong Seng

Virtual Receive Side Scaling (vRSS) In Windows Server 2012 R2 Hyper-V by Didier van Hoye

PowerShell

#PSTip Hiding parameters from tab completion by 

Manage Network Adapters with PowerShell: Configure an Adapter by Jeffery Hicks

Browsing PowerShell Commands using a grid window by Jan Egil Ring

System Center Core

Links zu IaaS mit Windows Server 2012 R2 & System Center 2012 R2 in German by Daniel Neumann 

System Center Advisor

SYSTEM CENTER ADVISOR NOW ADDS MONITORING FOR VIRTUAL MACHINE MANAGER by Thomas Maurer

System Center Configuration Manager

Dell tool for System Center Configuration Manager SP1 2012 by Marcelo Sinic

System Center Dataprotection Manager

Event ID 301–The protection agent upgrade failed because the protection agent is not installed on server by Lai Yoong Seng

System Center Virtual Machine Manager

Hyper-V Bare-Metal Deployment mit SC2012 R2 VMM in German by Daniel Neumann

SQL Server

SQL companyweb Wartung mit Log-Shrink auf einem altem SBS by Toni Pohl

#PSTip Get Active Database Connections of a SQL Database by Ravikanth Chaganti

#PSTip Drop all Active Connections of SQL Database by Ravikanth Chaganti

Windows Intune

Windows Intune: What’s New in Q3 2013 by Damian Flynn

Windows Client 

Microsoft Produkt-Updates im Sommer in German by Martina Grom

Windows Server Core

Build WS2012 R2 Storage Pools On JBOD, Virtual Disks, And CSVs Using PowerShell by Aidan Finn

Create a Basic Scale-Out File Server (SOFS) by Aidan Finn

Comparing The Costs Of WS2012 Storage Spaces With FC/iSCSI SAN by Aidan Finn

Designing a Basic Scale-Out File Server (SOFS) by Aidan Finn

A Bunch Of WS2012 R2 Storage Posts By Microsoft That You Need To Read by Aidan Finn

IaaS Innovations in Windows 2012R2 by Alessandro Cardoso

Networking enhancements in the Windows 2012 R2 based on customer feedbacks by Alessandro Cardoso

Einigen hilfreiche CmdLets zum Umgang mit Storage Spaces in German by Carsten Rachfahl

Tools

Upgrading Your DELL Compellent Storage Center Firmware (Part 1) by Didier van Hoye


Reflection on OSCON 2013


A couple of weeks ago, I was on my way to OSCON, one of the largest open source conferences. I know some may ask, "What is Dell doing there?" Despite Dell's long investment in Linux and open source (my team, Linux Engineering, has been around since the 90's), Dell's role in the Linux and open source communities is often little known. I'd like to give a summary of my experience at this year's OSCON, hoping to highlight some of Dell's role in Linux and open source.

Dell's booth at OSCON this year was manned by the dozen Dellites in attendance in addition to one OS partner from Canonical and another from SUSE. We made a few major announcements at OSCON, although my favorite is that Sputnik buyers are now offered three free months on Joyent's cloud. More details of our announcements are at <http://www.dell.com/learn/us/en/uscorp1/press-releases/2013-07-23-dell-open-source-cloud-big-data>.

Speaking of Joyent, by far my favorite session was "Open Source Systems Performance" by Brendan Gregg of Joyent. Despite having a background in both Linux and Solaris myself, I had never given DTrace--a dynamic tracing framework--a try, and I was floored both by Brendan's breadth of knowledge and by the power of DTrace at solving the kinds of performance problems that otherwise would be elusive with conventional tools. I also left feeling that the people at Joyent are well versed in solving hard performance problems and bringing better solutions to customers. I am glad that Joyent is one of Dell's public cloud partners and is contributing to Project Sputnik.

As far as Project Sputnik goes, when I arrived in Portland, my first stop was Dell's booth to set up our XPS 13 Developer Edition laptops. We had two to demo at our booth and were giving away another three that week. Here you can see me caught off-guard by Barton George's camera while I'm helping to set up. :) Dell's come a long way since we shipped our first Ubuntu pre-loaded systems in 2007. Many had thought that we stopped, when the truth is that our commitment to Ubuntu has only grown stronger over the years, though lately most of our shipments have been in Asia. In fact, the factory installer for Ubuntu developed by Dell is now used for all Ubuntu factory installs across OEMs. As a Linux zealot since 1998, I am hoping that Project Sputnik is just the beginning of a revival in Linux offerings here in the States.

Of course, I also dropped by the other end of the expo hall to talk with our friends at Red Hat, with whom we've worked for many years. I also got a chance to talk with the folks at Citrix about XenServer on PowerEdge. And, since I've been exploring the gamut of virtualization tools lately, Solomon Hykes' talk about Docker was informative and intriguing. I had a chance to meet many involved in OpenStack, both to celebrate OpenStack's third birthday and to join Rob Hirschfeld (OpenStack Foundation Board member and fellow Dellite) and others in discussing what's core to OpenStack. I'm still new to OpenStack but hope personally to become more involved.

Roaming the expo hall, one moment of delight was to see one particular startup's booth. A year ago, I had helped a team at Dell who was working with a customer on their all-SSD cloud storage platform. A year later, I was very excited to see some very familiar hardware on their brochures. They now sell a product running on some of my favorite hardware on my favorite OS. :) It's splendid to know that our goal of making Dell's support for Linux the best in the industry--both on client and server systems--pays off in many ways, even if it's often behind the scenes. From our OEM <dell.com/oem> offerings to our conventional PowerEdge and client offerings to our cloud offerings complete with Dell Crowbar <crowbar.github.com>, we work closely with many of the companies that lined the floor of this year's expo hall, whether by improving the open source tools available to them (linux.dell.com/git is just a sampling), contributing bug fixes and patches, ensuring that IHVs provide fully open source drivers, or selling hardware we fully support on their OS of choice.

Accelerate your Data warehouse query performance with SQL Server 2012 columnstore


The need is clear

We all know that data warehouse (DW) applications typically:

  • Support complex analytical query activities using very large data sets.
  • Fetch millions of records from the database for processing and take a long time to complete queries.

As data warehouses grow, high performance continues to be a critical need for most organizations. High performance is about achieving both speed and scale while dealing with increasing complexity and concurrency. Microsoft recognized this and developed a new feature in SQL Server 2012, the columnstore index, that accelerates DW query performance significantly.

A new perspective

A columnstore index groups and stores data for each column and then joins all the columns to complete the index. This differs from traditional indexes, which group and store row data. You might wonder how a simple change in the storage format of the table data can improve performance so drastically. With some research, I found the following reasons.

  • It allows SQL Server to retrieve only the necessary columns from the storage disk, which reduces the amount of data retrieved from storage.
  • It uses a compression algorithm that eliminates redundancy. Because columnar data has a higher chance of being redundant, compression helps store less data in the SQL Server buffer pool.
  • It stores frequently accessed columns in memory, which reduces I/O to the storage.

For details, read the MSDN Library article “Columnstore Indexes”.

What’s in it for Database Admins

Columnstore Index offers query performance and space saving benefits for SQL Server data warehouse database applications.

  • Space Savings
  • Query performance (index compression, optimized memory structures)
  • Parallelism

A table can have both a rowstore index and a columnstore index. The query optimizer decides which index is the most appropriate to use for a given query.
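
As a minimal sketch (the fact table, column names and index name below are hypothetical), a nonclustered columnstore index can be created alongside existing rowstore indexes like this:

-- Create a nonclustered columnstore index covering the columns that
-- analytical queries typically scan (hypothetical fact table).
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, CustomerKey, SalesAmount, Quantity);

-- For a selective lookup the optimizer may still pick a rowstore index;
-- you can also force it to ignore the columnstore index with a query hint:
SELECT *
FROM dbo.FactSales
WHERE OrderDateKey = 20130814
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);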

Gotchas-Need to know

Some key limitations of columnstore index listed by Microsoft are:

  • Creating a columnstore index makes the table read-only (a common loading workaround is sketched after this list). A columnstore index might also not be the ideal choice for selective queries that touch only one or a few rows, or that look up a single row or a small range of rows (e.g., OLTP workloads).
  • Cannot be clustered. Only non-clustered columnstore indexes are available.
  • Cannot act as a primary key or a foreign key.
  • A columnstore index on a partitioned table must be partition-aligned.
  • A columnstore index cannot be combined with:
    •    page or row compression
    •    replication
    •    change tracking
    •    change data capture
    •    FILESTREAM
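
Because of the read-only restriction, a common loading pattern in SQL Server 2012 is to disable the columnstore index, load the data, and then rebuild it. A minimal sketch, reusing the hypothetical index and table from above (the staging table is also hypothetical):

-- Disable the columnstore index so the table becomes writable again.
ALTER INDEX csi_FactSales ON dbo.FactSales DISABLE;

-- Load the new rows (for example, from a staging table).
INSERT INTO dbo.FactSales
SELECT * FROM dbo.FactSales_Staging;

-- Rebuild the columnstore index so queries can use it again.
ALTER INDEX csi_FactSales ON dbo.FactSales REBUILD;

Partition switching is another common alternative for very large tables, since a columnstore index on a partitioned table must be partition-aligned.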

It’s in the proof

To make sure columnstore really had an impact, I ran tests that evaluated the columnstore index (CSI) by implementing it on the two largest tables in a TPC-H database. These tests proved that using a columnstore index on large tables improved the query response times significantly for TPC-H queries. Using a columnstore index along with table partitioning provides the table partitioning benefits (such as DW loading and maintenance) as well as the columnstore index benefits of improved query execution times.

To get a better understanding of the test results and details on columnstore index performance tests and other DW tests, refer to our recent publication “Best practices for Decision Support Systems with Microsoft’s SQL server 2012 using Dell EqualLogic PS series storage arrays"

Assessing implications of applications migrations in Dell and VMware virtual desktop environments


As an industry, we have learned a few lessons when it comes to designing and optimizing virtual desktop environments. Things like carefully designing the storage infrastructure and carefully considering the “use cases”, both in terms of users and the applications they use, can make or break a desktop virtualization project. Experienced IT pros will tell you that among IT projects, none has so dramatic a potential to impact end users as a desktop operating system (OS) deployment. Doing a poor job in an OS update or migration can leave users feeling frustrated, damage IT’s credibility and can ultimately create resistance to future projects.

It is no different when it comes to creating desktop images for virtual environments (virtual images). Migrating and optimizing images for virtual environments is not simply a matter of installing the Windows operating system or converting an existing physical desktop image to a virtual image. An item sometimes overlooked in desktop virtualization deployments is taking the time and care to optimize the base desktop operating system for virtualization and to assess the implications of the critical applications that will be used in the virtual environment.

Given the preeminence of the Microsoft Office productivity suite as a core tool set in organizations today, applications such as Word, Excel, Outlook and PowerPoint can almost be considered primary “tools of the trade” for many employees. Office applications are used by the entire spectrum of users, from task workers through knowledge workers to power users. It is no surprise that desktop “golden images” will often include one or more Office suite applications. So what happens if there is a desire or need to upgrade these “tools of the trade” applications? What implications, if any, are there, and what might the impact of a seemingly simple application upgrade be in a virtual environment?

Our Dell desktop virtualization solution engineering team recently undertook testing to assess the implications related to user density of an upgrade from Office 2010 to Office 2013 in virtualized environments. Office 2013 leverages DirectX v10 multimedia capabilities, while Office 2007 and Office 2010 use older versions of DirectX. In this testing we were interested to understand the potential implications of moving to Office 2013 given the richer graphics capability of DirectX v10.

The results are interesting, and there are definite user density and scaling implications of moving from Office 2010 to Office 2013. I won’t spoil the ending here! We published the test results and discussed the implications in this Dell TechCenter article–it’s an interesting read–check it out! We concluded that careful consideration should be given to the implications of enterprise application migrations in virtualized environments.

This testing article is one small example of the testing, qualification and verification work we here at Dell are doing to optimize our solutions for desktop virtualization with VMware. Our goal is to help streamline your deployments with practical knowledge and best practices. Please enjoy.

Additional resources:

Dell DVS Enterprise with VMware Horizon View 5.2 Reference Architecture

Dell Desktop Virtualization Solutions web page  

August 8: Google+ Hangout with Puppet Labs 6.00 - 7.00 pm CEST (GMT + 2)


We are hosting our third OpenStack User Group Meetup on August 8th at 6.00 - 7.00 pm CEST (GMT + 2 hrs) at Hackerspace in Technopark Pomerania, an IT hub for local public and private tech ventures.

Speaker: Chris Hoge at Puppet Labs

Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to orchestration and reporting. Using Puppet, you can automate repetitive tasks, deploy applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.


Register for the Event

Please register for the event at Meetup.com and join us in person in Szczecin (Poland) or via Google+ Hangout and IRC chat (both will be open from 6.00 - 7.00 pm CEST (GMT +2)).

Google Hangout

https://www.youtube.com/user/OpenStackPoland

IRC Channel

#openstack-pl @freenode

for non-IRC users: http://openstackpoland.aegis.org.pl/irc-channel/

Determining drive slot location on Linux for LSI-9207 SAS controllers


Posted on behalf of Jordan Hargrave, Linux Software Engineer

The LSI 9207 controller is a SAS HBA controller that is supported in several Dell PowerEdge servers. Because Linux and SAS support hotplug, the drive ordering on this controller is nondeterministic. After rebooting, the ordering of sda, sdb, sdc, etc. may not match the drive slots on the backplane. On most Linux distros the actual drive name does not matter, as the installer will use UUIDs to correctly determine the root partition and mounted filesystems. However, if a drive fails and needs replacing, you need to determine the physical slot number of the drive.

On PowerEdge systems which use a SAS expander backplane (usually any system with > 8 drives) it is possible to determine the drive slot mapping using sysfs. On systems using a backplane without a SAS expander (systems with <= 8 drives), it gets a bit more difficult. The information can either be obtained from dmesg output, or requires using a utility from LSI. Using the sas2ircu utility will also work on systems without an expander, so using it is the best way to get drive mappings on any system. 

The following script uses the sas2ircu utility to get disk mappings on any system.

#!/usr/bin/perl
# sasscan script
# Determine the drive bay slot numbering of disks connected to LSI 92xx controllers
#
# Requires sas2ircu utility obtained from LSI website

# Read the controller 0 inventory from sas2ircu
open(BAR, "sas2ircu 0 DISPLAY |");
while (<BAR>)
{
    chomp;
    if (/^Device .*unknown/) {
        $ishdd = 0;
    }
    if (/^Device is a Hard disk/) {
        $ishdd = 1;
        $encl = $slot = -1;
    }
    if (/Enclosure #.*: (\d+)/ && $ishdd) {
        $encl = $1;
    }
    if (/Slot #.*: (\d+)/ && $ishdd) {
        $slot = $1;
    }
    if (/GUID.*: (.*)/ && $ishdd) {
        # The GUID is the drive WWN; map it to the kernel device via /dev/disk/by-id
        $guid = $1;
        $guid =~ s#^ ##g;
        $guid =~ s# $##g;
        $file = "/dev/disk/by-id/wwn-0x$guid";
        $disk = readlink($file);
        $disk =~ s#.*/##g;
        print "$encl:$slot $file $disk\n";
    }
}
close(BAR);

The following script will display bay ID mapping of disks connected to SAS Expander backplane only. This script does not require the sas2ircu utility. 

#!/bin/sh
# Display drive bay for disks connected to SAS expander backplane
for name in /sys/block/* ; do
    npath=$(readlink -f "$name")
    # Walk up the sysfs path until a sas_device entry with a bay_identifier is found
    while [ "$npath" != "/" ] ; do
        npath=$(dirname "$npath")
        ep=$(basename "$npath")
        if [ -e "$npath/sas_device/$ep/bay_identifier" ] ; then
            bay=$(cat "$npath/sas_device/$ep/bay_identifier")
            encl=$(cat "$npath/sas_device/$ep/enclosure_identifier")
            echo "$name has BayID: $bay"
            break
        fi
    done
done
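
For reference, here is roughly how the two scripts are run and what their output looks like; the script file names, device names, WWNs and bay numbers below are only illustrative, following the print/echo statements above:

# Script 1 (needs sas2ircu in the PATH): prints enclosure:slot, the WWN symlink and the kernel name
$ perl sasscan.pl
2:0 /dev/disk/by-id/wwn-0x5000c5004a2b1c3d sda
2:1 /dev/disk/by-id/wwn-0x5000c5004a2b9f7e sdb

# Script 2 (SAS expander backplanes only)
$ sh sas-bayid.sh
/sys/block/sda has BayID: 0
/sys/block/sdb has BayID: 1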

Resources:

LSI sas2ircu utility


Performance characterization for boot-on-demand scenarios in Dell and VMware desktop virtualization environments


A fundamental component of all of our Dell and VMware solutions for desktop virtualization is the execution of a variety of performance characterizations and evaluations. The Dell and VMware engineering teams collaborate closely behind the scenes to do these performance characterizations and to share and establish best practices and guidance in our reference architectures for desktop virtualization. We announced our renewed joint focus on solutions for desktop virtualization earlier this year, and we are now beginning to publish an assortment of best practices and unique performance testing results via blogs, wikis, and other social media. We like to think of this as our summer project for 2013! This engineering work is a core part of the “solution-izing” process, aka taking a set of ingredients and creating a robust, trustworthy, end-to-end optimized solution.

In this blog we introduce a Dell TechCenter wiki article outlining performance testing and analysis of various options for managing a pool of virtual desktops. As outlined in the article we assessed the impact of various boot-on-demand scenarios on resource utilization and end-user experience and the implications of these scenarios on power optimization. The assessment work looked at impacts on both storage and CPU utilization–and also at the end user experience … as implications for the user experience must always be carefully assessed in any virtual desktop environment.

This performance testing article is another example of the system wide approach to testing and performance qualification we do to optimize our solutions for desktop virtualization with VMware. Recently we posted this TechCenter article on testing results of migrating to Office 2013 from Office 2010 in virtual desktop environments. Our efforts for generating best practices and real-world optimization practices you can use are gaining momentum–make sure to check back at Dell TechCenter Client Virtualization site often for the latest technical articles. 

Additional resources:

Dell DVS Enterprise with VMware Horizon View 5.2 Reference Architecture

Dell Desktop Virtualization Solutions web page  

SCOM 2012 SP1 support, VRTX monitoring and more…with the new 5.1 release of the Dell Server Management Pack Suite!


Version 5.1 of the Dell Server Management Pack Suite adds support for System Center 2012 SP1 Operations Manager and for new server platforms & firmware from Dell (including the exciting Dell PowerEdge VRTX systems), in addition to other enhancements and fixes. You can find more details about the suite – which you can use to monitor Dell servers from Microsoft System Center Operations Manager (SCOM) – along with links to detailed installation and usage instructions and lists of supported devices, on our Dell TechCenter wiki page. We will be getting all the links on the wiki page updated with the latest information over this week. And while we do that, please continue to post on our forum and keep those suggestions coming in.

Let me, on behalf of the MP development team, share the highlights of the 5.1 release:

Microsoft System Center 2012 SP1 support

We have qualified the suite on the latest SP1 release of System Center 2012 Operations Manager. With the SP1 release of System Center 2012 Operations Manager, you can use Windows 2012 as the operating system for your management station and managed nodes.

Dell PowerEdge VRTX Monitoring

You can monitor the latest Dell PowerEdge VRTX systems using the suite’s Chassis Monitoring Feature.  We now provide separate chassis views (Diagram, Alert and State Views) for Dell PowerEdge VRTX and Dell M1000e chassis.  The blade servers in the VRTX chassis can be discovered using the in-band or out-of-band monitoring features and shown under the corresponding chassis slot using the Chassis Modular Server correlation feature. You can also view the alerts related to storage monitoring  in the Dell PowerEdge VRTX Alerts View.

Latest and Greatest Dell Server Platform support

We have added support for Dell OpenManage Server Administrator (OMSA versions 6.4 through 7.3) for the In-Band monitoring feature; we also support the enhanced EEMI event mode of OMSA now. New alerts and device attributes from the latest CMC and DRAC firmware versions are also added for the Chassis, DRAC and Out-Of-Band monitoring features.

Easier, more streamlined installer

You now need to run the installer on only one of your management servers to get the Feature Management dashboard in the SCOM console (in the previous release, we required you to run the installer on all the management servers to get the dashboard). Also, configuring credentials during the installation for the Dell Device Helper Utility that is part of the suite is now optional – it is needed only if you use a monitoring feature that requires it.

You can use the Feature Management dashboard to enable, disable and customize the suite’s Monitoring Features.

Monitoring Features

Here's a quick overview of the Dell Server Management Pack Suite’s Monitoring Features, provided on SCOM 2007 R2, SCOM 2012/SP1 and System Center Essentials 2010:  

Server (In-band) Monitoring: Using this feature, you can inventory and monitor Dell Servers running Windows through OMSA using a SCOM agent installed on the server.

Server (Out-of-band) Monitoring: You can use this feature to inventory and monitor the 12th Generation of Dell PowerEdge Servers, running any OS/Hypervisor, through the iDRAC7 controller using WSMAN and SNMP. You don’t need a SCOM agent to be installed on the server for the hardware monitoring.

Chassis Monitoring: To monitor the CMCs on Dell PowerEdge VRTX chassis and Dell M1000e chassis using SNMP. We also provide slot inventory information for the chassis.

Chassis Modular Server Correlation: You can use this feature for a drill-down view of the servers (discovered using either in-band or out-of-band monitoring) installed in the slots of the chassis.

DRAC Monitoring: To monitor Dell Remote Access Controllers (DRAC5, iDRAC6, iDRAC7, ...) using SNMP.

Please do visit our Dell TechCenter wiki page for more information and our forum for feedback and comments.

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #41


“The Elway Gazette”

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well; I hope you enjoy their posts. To all MVPs: if you’d like me to add your blog posts to my weekly compilation, please send me an email (Florian_Klaffenbach@Dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

 


Featured Posts of the Week!

Storage Management–File Server Using VMM 2012 R2 by Lai Yoong Seng

Upgrading Your DELL Compellent Storage Center Firmware (Part 2) by Didier van Hoye

How to Store Hyper-V Virtual Machines on SMB 3.0 Storage by Aidan Finn

Microsoft Releases Windows Server 2012 R2 Licensing Information by Aidan Finn


Hyper-V

How to Store Hyper-V Virtual Machines on SMB 3.0 Storage by Aidan Finn

Testing SMB Live Migration on WS2012 R2 Hyper-V by Aidan Finn

 

Lync Server

Automating Microsoft Lync using Windows PowerShell by Jan Egil Ring

SefaUtil GUI by Johan Veldhuis

 

Office 365

Videocast: Öffentliche Webseite gestalten der Webseitenelemente in German by Kerstin Rachfahl

 

PowerShell

#PSTip Swapping the value of two variables by 

#PSTip Multiple variable assignments by 

#PSTip Initializing multiple variables at once by 

 

Sharepoint 

My CMSWire Article on The Future of SharePoint On-Premises is Published by Andrew Connell

 

System Center Dataprotection Manager

SC2012 SP1 DPM Installationsprobleme – deutschsprachiger Windows Server in German by Daniel Neumann

   

System Center Orchestrator

What Is Microsoft System Center 2012 – Orchestrator? by Damian Flynn

 

System Center Virtual Machine Manager

Delete obsolete virtual networks with dependency in VMM 2012 SP1 by Romeo Mlinar

Storage Management–File Server Using VMM 2012 R2 by Lai Yoong Seng

 

SQL Server

#PSTip Retrieve all SQL instance names on local and remote computers by Ravikanth Chaganti

 

Windows Server Core

DOS-Angriff für jedermann: AD-Konten sperren in German by Nils Kaczenski

Microsoft Releases Windows Server 2012 R2 Licensing Information by Aidan Finn

 

Tools

Upgrading Your DELL Compellent Storage Center Firmware (Part 2) by Didier van Hoye

José Active Directory Reporting: English version is live now! by Nils Kaczenski

TechChat: Host Integration Tools for Microsoft 4.6


Join us today on the Dell TechCenter TechChat as we talk about the Host Integration Tools for Microsoft version 4.6. We will have experts online to answer all your questions and talk about the new features. Here is a quick look at what to expect on today's chat.

HIT/Microsoft 4.6

Enable efficient data protection and simplify EqualLogic SAN management and operation with Host Integration Tools for Microsoft 4.6, featuring centralized data protection and management features to simplify administration of multiple hosts and EqualLogic SAN groups in your infrastructure. This new release brings Storage Management integration with System Center Virtual Machine Manager (SCVMM) 2012 SP1 to further extend management capabilities for virtual Microsoft environments. In addition, this release brings support for Microsoft® Exchange 2013 and Microsoft® SharePoint 2013.

Talk to you this afternoon at 3:00 PM Central.

Wrap Up: Google+ Hangout with Puppet Labs


We hosted our 3rd OpenStack User Group Meetup last week, on August 8, with Chris Hoge from Puppet Labs. If you were not able to attend, here’s the recorded session:

We are going to host our next OSUG with Google+ Hangout and IRC Chat in September, watch out for the date and agenda!

Near Real Time Extract Transform Load (ETL) using Boomi Web Services Server Connector


Overview

The Boomi Web Services Server connector is a listener that accepts HTTP and SOAP requests in real-time and initiates Dell Boomi AtomSphere processes. When a process containing this connector is deployed to an Atom, the Atom’s internal web server will listen for documents based on configurations made in the Web Services Server operation.

This connector has multiple uses, such as a web application updating/deleting/inserting records in a database, a database event triggering a notification to a user, Salesforce-to-database integration, and so on. This blog explores implementing this connector for extracting, transforming and loading (ETL) data from a transactional database to a data warehouse in real time. It is assumed the reader is familiar with Boomi concepts, building basic processes, database connectors and map functions.

Configuration for ETL data from OLTP database to Data Warehouse

As shown in figure-1, SQL triggers are fired when there is any modification to the OLTP database. The SQL trigger calls the Boomi Web Services Server connector, which then initiates the Boomi process. The Boomi process connects to the OLTP database, gets the changes to the data, and applies these changes to the Data Warehouse. With multiple inserts/deletes/updates happening on the transaction database, multiple Boomi processes execute simultaneously to ETL data into the data warehouse.

                        Figure 1.     Test configuration for ETL data from OLTP database to Data Warehouse


 

SQL trigger

For our use case, let’s consider a table in the OLTP database which, when modified, triggers changes to the fact & dimension tables in the Data Warehouse. The code for the SQL trigger is as follows:

CREATE TRIGGER [dbo].[trigger_name]
   ON  [dbo].[table_name]
   AFTER INSERT
AS
BEGIN
    -- Declare all the variables
    declare @object int
    declare @return int
    declare @valid int
    declare @uri varchar(2000)

    -- Set the variables
    set @valid = 0

    -- Delete all records in the temporary table
    DELETE FROM tempins

    -- Insert the primary key of the inserted/deleted/updated records in a temporary table
    INSERT INTO [dbo].[tempins](CA_ID)
    SELECT CA_ID from inserted

    -- URL of the Web Services Server connector. For more information on building the URL,
    -- check the section "Getting the URL that your SQL trigger calls"
    set @uri = 'http://<ipaddress OLTP server:port>/ws/simple/<Object name>?type=Insert'

    -- Create the XMLHTTP object
    exec @return = sp_oacreate 'MSXML2.ServerXMLHTTP', @object output
    if @return = 0
    begin
        -- Open the connection
        exec @return = sp_oamethod @object, 'open', NULL, 'POST', @uri, false
        if @return = 0
        begin
            -- Send the request
            exec @return = sp_oamethod @object, 'Send', NULL
            PRINT @return
        end
        if @return = 0
        begin
            declare @output int
            exec @return = sp_oamethod @object, 'status', @output output
            if @output = 200
            begin
                set @valid = 1
            end
        end
        else
            PRINT 'no output'
    end
    else
        print 'no xmlhttpobject'

    -- Destroy the object
    exec sp_oadestroy @object
    if @valid = 1
        print 'valid'
    else
        print 'invalid'
END
GO

Boomi Process

Figure-2 shows the Boomi process, which starts once the SQL trigger executes. The Web Services Server connector initiates the process. The database connector connects to the OLTP database, gets the IDs from the temp table, and queries the related tables to get all the details of the records. The map shape maps these fields to the fact and dimension fields in the Data Warehouse, and the changes are applied. For more information on building Boomi processes and getting started, refer to the video http://www.youtube.com/watch?v=ZAb9Ev3tuMw&feature=youtu.be or the Boomi help link http://help.boomi.com/atomsphere/GUID-B522EE93-E8A2-43CC-9D3E-EF37371AEF32.html

            Figure 2.     Boomi ETL of data from OLTP database to Data Warehouse based on specific event

The sections below describe the steps to follow specifically when using the web services connector.

Getting the URL that your SQL trigger calls

1. To get the url to ping from the SQL trigger, navigate to Manage Atom Management in your Boomi account.

2. Select the atom which is created in the network where the database resides. In the Atom properties window, select Shared Web Server settings option as shown in figure-3.

   Figure 3.     Shared Web Server Settings

3. Navigate to the User Management window as shown in figure-4. Copy the URL displayed in the Base URL field. It should be in the form 'http://<ipaddress OLTP server or servername:port>'.

Figure 4.     User Management

  

4. Navigate to the Web Services Server operation and copy the simple URL path as shown in figure-5. Add it to the URL from step 3 to get the complete URL that the SQL trigger calls: 'http://<ipaddress OLTP server or servername:port>/ws/simple/<object name>'.

Figure 5.     Web Services Server Operation

5. Optionally, to pass parameters in the URL, use ‘?’ followed by name=value pairs, e.g. 'http://<ipaddress OLTP server:port>/ws/simple/<Object name>?param1=Insert&param2='.

For more information on the boomi web service connector watch the videos: http://training.boomi.com/display/TDASH/Training+Video+Library
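
As a quick sanity check before wiring up the SQL trigger, the listener URL can be exercised directly; this is only a sketch, and the host, port, object name and any authentication required depend on your Shared Web Server settings:

# POST to the deployed Web Services Server operation from the database server
curl -X POST "http://<ipaddress OLTP server or servername:port>/ws/simple/<Object name>?type=Insert"

A 200 response indicates the Atom's shared web server received the request and started the process.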

Accessing the parameters in Boomi process

Any parameters passed in the url can be accessed in the boomi process as “Dynamic Process Property” with query_ added before the parameter name as shown in figure-6

Figure 6.     Dynamic Process Property

Performance

For a simple scenario of inserting/updating/deleting data in a table in the transaction database, with the corresponding inserts/deletes/updates applied to the fact & dimension tables in the data warehouse, the average response times were as follows:

  • 4-5 sec for insert/update/delete for 1 to 15 rows
  • 80.458 sec ( 1.3 min) for inserting 55000 rows
  • 350.857 sec ( 5.8 min) for updating 55000 rows
  • 591.532 sec ( 9.8 min) for deleting 55000 rows

The insert/update/delete statements were executed immediately. Three Boomi processes were called and were executing simultaneously. The delete possibly took more time because it was waiting for SQL Server to complete the updates before deleting the records.

                               
Figure 7.     Average Response Time for insert/update/delete for 1 to 15 rows


                                 
Figure 8.     Average Response Time for insert/update/delete for 55000 rows

Conclusion

In today’s complex service-oriented architectures, bringing data together and making sense of it becomes increasingly difficult. Real-time data integration is increasingly used by companies to improve their business and better serve their customers. The challenge with traditional batch systems is that processes are usually executed once daily, invariably overnight, to reduce load on the architecture. The advantages of transitioning to a near-real-time system from traditional nightly batch processing include improved knowledge of how the business is performing and the ability to offer services that outperform competitors.

The procedure described in this blog shows how Boomi can be utilized for near-real-time, on-demand ETL processing that can have a huge payback for certain business scenarios. Depending on the complexity of the target database, creating and managing all the database triggers and corresponding Boomi processes can increase the overhead of maintaining operations and could potentially impact overall performance.

Expert Oracle RAC 12c now in print


Dell Engineer co-authoring the new Apress book “Expert Oracle RAC 12c”

Kai Yu, a Senior Principal Engineer from the Dell Oracle Solution Engineering organization, has co-authored the latest Oracle 12c Real Application Clusters book, “Expert Oracle RAC 12c”, which is scheduled to publish today (August 14, 2013). This book is a hands-on guide to help readers understand and implement Oracle Real Application Clusters (RAC) and to reduce the total cost of ownership (TCO) of a RAC database. As a seasoned professional, you are probably aware of the importance of understanding the technical details behind the RAC stack. This book provides a deep understanding of RAC concepts and implementation details that you can apply to your day-to-day operational practices. You’ll be guided in troubleshooting and avoiding trouble in your installation. Successful RAC operation hinges upon a fast-performing network interconnect, and this book dedicates a chapter solely to that very important and easily overlooked topic. Kai was responsible for writing the following chapters:

Chapter 1: Overview of Oracle RAC

Chapter 2: Clusterware Stack Management and Troubleshooting

Chapter 4: New Features in RAC 12c

Chapter 5: Storage and ASM Practices

Chapter 9:  Network Practices

 

In these chapters, Kai leveraged Oracle RAC database best practices developed in the Dell Oracle Solution Engineering team and the Dell IT Global Database Management team, where he has worked for more than 14 years. For more information about Dell Oracle Solution Engineering, please visit the Dell TechCenter Oracle solution page: http://www.delltechcenter.com/oracle.



About the author:

Kai Yu is a Senior Principal Engineer and technologist in the Dell Oracle Solutions Engineering Lab, specializing in Oracle RAC, Oracle virtualization and cloud. With 18+ years of experience working with Oracle technology, he has implemented and managed many large mission-critical Oracle RAC databases, including those in his tenure with an IT organization that has more than two thousand RAC databases. Kai is a well-known author of technical articles and a frequent speaker at Oracle conferences worldwide such as Oracle OpenWorld and Collaborate, and has delivered a keynote at Oracle Architect Day. Kai has served the IOUG Oracle RAC SIG as president and in two chair positions, and has served on the IOUG Collaborate conference committee. He was named an Oracle ACE Director in 2010 and received the Oracle ACE Spotlight in 2011 from the Oracle Technology Network (OTN), as well as the 2011 OAUG Innovator of the Year Award from the Oracle Applications User Group (OAUG). In 2012, Oracle Magazine awarded him the Oracle Excellence Award: Technologist of the Year: Cloud Architect. Kai actively shares his Oracle knowledge on his blog: http://kyuoracleblog.wordpress.com/.

 


Desktop Virtualization: Leveraging Dell EqualLogic hybrid blade arrays for a “data center in a box” VDI solution


Earlier, we discussed the VDI use cases of the three different series of Dell EqualLogic hybrid storage arrays we offer.

As you recall, EqualLogic PS-M4110XS series hybrid arrays provide a blade form factor suitable for a comprehensive, self-contained VDI solution within a modular and compact blade enclosure. Together with Dell PowerEdge blade servers and Dell Force10 blade switches, these hybrid blade arrays create a “data center in a box” for VDI deployments.

This approach helps organizations reduce virtual desktop deployment and operational costs through efficient use of switching resources, minimized cabling, and consolidated management.

Recently we created a Reference Architecture that presents this unified compute, storage, and switching all-in-blade-form-factor platform for hosting and running VDI workloads.

This Reference Architecture demonstrates how a modular 1,000 standard user virtual desktop environment – all self-contained within a Dell PowerEdge M1000e blade chassis – can be deployed in a VMware Horizon View 5.2 VDI infrastructure leveraging 12 PowerEdge M620 blade servers, four Force10 MXL blade switches, and two EqualLogic PS-M4110XS hybrid blade arrays.


Figure 1: Reference Architecture compute, storage and network layout in the blade chassis

In the test environment, two PS-M4110XS arrays delivered approximately 19,000 IOPS during boot storm, 9,000 IOPS during login storm, and 4,400 IOPS during steady state with satisfactory performance results across all layers – user layer, hypervisor layer, and storage layer – of the stack.

This paper provides the details of the storage I/O characteristics under various VDI workload scenarios like boot and login storms along with performance characteristics throughout the VDI stack (for example, ESXi server performance and user experience).

Check it out here.

Citrix XenServer: Hardware Compatibility List, clarification on the use of OEM-DELL as a filter term

$
0
0

written by Jerry Clement, Dell Enterprise OS Engineering, Project Manager/Engineer for Citrix XenServer

 

Here's an interesting scenario. It's not a BIG deal, but knowing about it might help in the future when looking up Dell Hardware on the Citrix XenServer Hardware Compatibility List.

One of our customers wanted to evaluate XenServer on one of his brand new Dell Servers. So he wanted to see which servers were certified. He went to  http://hcl.xensource.com/.

He then selected "Servers" from the choices at the bottom of the page. This brought him to the Home>Servers page of the HCL.

On the left you can "Apply Filters" to help you search the listing. The first set of filters are Product Edition:

[Screenshot: Apply Filters panel showing the Product Edition filter]

The customer selected OEM-DELL and then clicked on Apply Filters. Not a single Dell 12th Generation PowerEdge Server appeared on the list. There is a reasonable explanation for this.

XenServer currently comes in only one product edition, Retail, which must be installed to a hard drive. The OEM-DELL product edition of XenServer was first released back in 2008 and was a custom ISO installed to an SD card in the server. This edition was named XenServer Enterprise Embedded, and it was discontinued with the launch of the XenServer 5.6 Enterprise Retail Edition back in 2010.

Engaging a filter by clicking on OEM-DELL will only reveal those systems that were certified with the XenServer Enterprise Embedded version. This excludes all Dell 12th generation PowerEdge Servers. Our recommendation is to skip the Product Edition filter leaving it on the default of "Any."

Then go to the Vendor filter and select Dell:

[Screenshot: Vendor filter with Dell selected]

When you click on "apply filters" all Dell servers with a XenServer Certification will be listed. Click on the Server of your choice to see which versions of XenServer have been certified for that system.

If you are looking for Servers certified with a specific version of XenServer, include the Product Version filter:

[Screenshot: Product Version filter]

Select the Product Version number you are planning to use in your environment and this will list out all Dell PowerEdge Servers certified for that specific version.

Comment on this post if there are other questions regarding XenServer Compatibility with Dell PowerEdge Servers.

Dell PowerEdge R820 VMmark result


This post was written by Waseem Raja, an engineer on the Dell Solutions Performance Analysis team.

I’ve gotten a lot of questions around a VMmark result on the Dell PowerEdge R820.  I’m happy to announce that our Solutions Performance Analysis team has produced a VMmark 2.5 result of 19.79 @ 18 tiles on the PowerEdge R820.

The Dell PowerEdge R820 is a high-performance four-socket, 2U rack server designed for dense virtualization and scalable database applications. Our previously published VMmark result was a two-socket result based on the Dell PowerEdge R720, which achieved a score of 11.39 @ 10 tiles. Each tile in VMmark 2.5 represents 8 virtual machines, so we can see that the four-socket R820 was able to sustain 80% more virtual machines than the two-socket R720.

For this result, two PowerEdge R820 servers were each configured with four Intel Xeon E5-4650 processors and 512 GB of memory. The load-driving clients were Dell PowerEdge R410 servers, and they were connected together using Dell Networking switches. The virtual machine storage was served by two Dell PowerEdge R720 servers, each containing three Fusion-io 2.4TB ioDrive2 Duo cards.

This is the first Dell VMmark result to use the Fusion-io ION Data Accelerator as the storage backend. The ION Data Accelerator software transforms industry-leading open server platforms into powerful shared acceleration devices for vertical applications or clusters. Multiple ioDrive2 Duos deliver the same industry-leading low latency, while achieving gigabytes of bandwidth and hundreds of thousands of IOPS with a single server.

We consider this result to be a great example of how an end-to-end Dell solution provides optimal performance for our customers.

VMmark 2.5 is an industry standard benchmark used to measure the performance and scalability of a virtualized environment. It constitutes application-level workloads along with platform-level workloads (vMotion, Storage vMotion and Deploy operation) for a realistic measure of virtualization performance.

For more information on this result, the full disclosure is attached to this blog post.

Dell Open Source Ecosystem Digest #26. Issue Highlight: “Yes, you can: make money selling OpenStack”


“Yes, you can: make money selling OpenStack”: please find the blog post at the Mirantis page. Enjoy reading!

DevOps

OpenStack

Hadoop

Contributors

Please find detailed information on all contributors in our Wiki section.

Contact

If you have any feedback, suggestions, ideas, or if you’d like to contribute - I’ll be happy to hear back from you.

Twitter: @RafaelKnuth

Email: rafael_knuth@dellteam.com

Automate Deployment of Hadoop Clusters with Dell Crowbar and Cloudera Manager


by Mike Pittaro

Deploying, managing, and operating Apache Hadoop clusters can be complex at all levels of the stack, from the hardware on up. To hide this complexity and reduce deployment time, since 2011, Dell has been using Dell Crowbar in conjunction with Cloudera Manager to deploy the Dell | Cloudera solution.

Cloudera Manager does a great job of deploying and managing the Hadoop layers of a cluster, but it depends on an operating system to be in place first. Meanwhile, to complement those capabilities, Dell Crowbar is a complete automated operations platform, designed to deploy layers of infrastructure from bare-metal servers all the way up the stack.

In the Dell | Cloudera Solution, we use Crowbar to provision the hardware, configure it, and install Red Hat Enterprise Linux and Cloudera Manager -- then, Cloudera Manager takes over to guide the user to a functioning cluster. Furthermore, we use the Cloudera Manager API [http://cloudera.github.io/cm_api/] to entirely automate the cluster setup process.

In this post, I’ll provide more details about how we have successfully integrated the Cloudera Manager API with Dell Crowbar.

Crowbar overview

Dell Crowbar, inspired by DevOps principles, is based on the concept of defining hardware and software configuration in code, just like applications. Crowbar uses a modular approach, where each component of the stack is deployed as an independent unit. The definition of each component is a Crowbar module called a “Barclamp”.

Figure 1 shows the Barclamps included in a typical Hadoop installation -- including hardware configurations and functions like DNS and NTP -- all the way up to the Cloudera components.

Figure 1 Crowbar Barclamps for Hadoop

During the deployment process, the Crowbar interface is used to create a “proposal” based on a Barclamp. Within the proposal editor, the hardware nodes are assigned their intended roles, and then the proposal is saved and applied. A single node can have several roles assigned depending on its function, and Barclamps are aware of dependencies between roles. For example, in order to deploy CDH, the single Cloudera Manager proposal is applied and subsequently Crowbar takes care of all the other requirements.

Cloudera Manager Barclamp

Figure 2 shows the Cloudera Manager proposal within Crowbar. The Barclamp defines seven roles available for nodes within the cluster:

  • Clouderamanager-cb-adminnode – the node running Crowbar
  • Clouderamanager-server – the node running the Cloudera Manager server
  • Clouderamanager-namenode – the nodes running the NameNode, whether active/passive or quorum HA is being used
  • Clouderamanager-datanode – the cluster data nodes
  • Clouderamanager-edgenode – an edge or gateway node for client tools
  • Clouderamanager-ha-journaling node – a quorum-based journaling node for quorum HA
  • Clouderamanager-ha-filernode – an NFS filer node, for active/passive HA using a shared NFS mount

Figure 2 Cloudera Manager Barclamp proposal showing nodes and roles

In the Crowbar interface, available hardware nodes are dragged to the appropriate roles in the proposal, and then the proposal is applied. At that point, Crowbar analyzes dependencies and makes any required changes to the nodes. If the hardware nodes are new, these changes might involve hardware configuration and a complete OS install. If the nodes already have an OS installed, the changes simply might involve installing some additional packages. 

In the proposal editor, the roles defined within Crowbar closely correspond to the typical roles of nodes in a Hadoop cluster and can be mapped almost directly to their corresponding Hadoop services within Cloudera Manager. So, we decided to use this information to integrate with the Cloudera Manager API. 

The integration is enabled by setting the deployment mode in the proposal to “auto mode”, as shown in Figure 3. In auto mode, applying the proposal will configure the hardware and OS as necessary, install Cloudera Manager on the edge node, install the agents on the remaining nodes, and then use the Cloudera Manager API to create the cluster based on the Crowbar roles. The install also sets up a local Yum repository for all the CDH packages, so the install can proceed without full Internet access. 

Figure 3 Auto deployment mode in the Cloudera Manager Barclamp

Behind the Scenes with the API

Crowbar is implemented primarily in Ruby, and the Barclamps are wrappers around Opscode Chef, which also uses Ruby for its recipes and cookbooks. This means that the automatic deployment implementation in Crowbar would also be in Ruby.

The Cloudera Manager API provides client libraries for Java and Python, but not Ruby, so the first step in implementing the integration was to create a Ruby client library for the API. The API is standards based, using REST and JSON as a data format, so this was a relatively straightforward process. The Ruby library parallels the Python implementation. (The library is currently Apache licensed as part of Crowbar, but could be split out in the future if there’s community interest.)

The actual cluster deployment logic is in the file cm-server.rb. It turns out to be relatively straightforward, and follows the flow described in “How-to: Automate Your Hadoop Cluster from Java”. The information about the nodes, their roles, and their configuration are already available in Crowbar, so the code primarily iterates through the Crowbar data structures, and makes the calls to create the cluster, the HDFS service, and the MapReduce Service. Since Cloudera Manager also uses a role-based approach, this mapping turns out to be very clean.
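
For readers who want to experiment with the same flow outside of Crowbar, here is a minimal sketch using the Cloudera Manager Python client (cm-api); the host names, cluster name, role names and credentials are placeholders, it assumes the Cloudera Manager agents are already installed on the hosts, and error handling is omitted:

# Minimal sketch of the cluster-creation flow against the Cloudera Manager API.
from cm_api.api_client import ApiResource

api = ApiResource("cm-server.example.com", username="admin", password="admin")

# Create the cluster and attach the hosts where the agents are running.
cluster = api.create_cluster("cluster1", "CDH4")
cluster.add_hosts(["node1.example.com", "node2.example.com", "node3.example.com"])

# Create the HDFS service and assign roles to hosts, mirroring the
# Crowbar role assignments (namenode, datanodes, and so on).
hdfs = cluster.create_service("hdfs1", "HDFS")
hdfs.create_role("hdfs1-nn", "NAMENODE", "node1.example.com")
hdfs.create_role("hdfs1-snn", "SECONDARYNAMENODE", "node1.example.com")
for i, host in enumerate(["node2.example.com", "node3.example.com"]):
    hdfs.create_role("hdfs1-dn%d" % i, "DATANODE", host)

# Format HDFS on first use, then start the service and wait for completion.
hdfs.format_hdfs("hdfs1-nn")[0].wait()
hdfs.start().wait()

The MapReduce service follows the same pattern with a "MAPREDUCE" service type and JOBTRACKER/TASKTRACKER roles.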

There’s even some logic there to handle licensing. The Cloudera Manager API corresponds directly to Cloudera’s packaging, which includes Standard (free) and Enterprise (paid support, with a 60-day trial option) versions. If a Cloudera license is entered in the Crowbar proposal it is used; otherwise, the Enterprise trial license is activated. Our current integration uses the free APIs for HDFS and MapReduce configuration.

So far, the automatic deployment capability has significantly reduced the time to deploy a cluster, especially if there are a large number of nodes. More important, it eliminates the error-prone manual process of re-entering the node and role information into the Cloudera Manager wizard. (This is a new feature, and we’re still collecting feedback on enhancements.)

We currently set up the core HDFS and MapReduce services for the cluster. In the future, we will likely configure more services as part of the automatic deployment. Cloudera Manager role groups provide another interesting opportunity for further integration, since we have information about the actual hardware configuration in Crowbar, and can start grouping nodes together based on hardware-specific features.

In the meantime, Dell and Cloudera continue to collaborate on other new features to make sure the various systems and technologies integrate effectively to provide the easiest option to setup and manage Hadoop clusters.

Mike Pittaro (@pmikeyp) is Principal Architect on Dell's Cloud Software Solutions team. He has a background in high performance computing, data warehousing, and distributed systems, specializing in designing and developing big data solutions.
