Channel: Dell TechCenter

Microsoft Masterminds Episode 1: Jeff Wouters on the Power of PowerShell


Welcome to the first episode of tech talks with outstanding Microsoft community members. Most upcoming interviews will be with Microsoft Most Valuable Professionals (MVPs), but I am not limiting myself to that group of outstanding Microsoft tech experts. Take Jeff Wouters, for example: Jeff is recognized by the Microsoft community as a PowerShell magician. Want to learn why? Read on … and enjoy!

Me (left) talking to Jeff Wouters (right) in Hamburg, Germany during the E2EVC Conference 2012

Flo: Jeff, I am happy to talk with you! Can you tell us about yourself and about your work?

Jeff: I live in Holland, and my main focus is Hyper-V and System Center. I ran into some limitations in the product, so I turned to PowerShell. With a little bit of googling … sorry: binging … I was able to find a lot of documentation. The PowerShell community is huge, and the information they share is even better, so I was able to create some easy scripts. Then I attended two sessions with Don Jones, a PowerShell MVP … one session was about Active Directory management and one about PowerShell Remoting. I then saw the power of PowerShell and I was like: “I have to learn PowerShell!”

Windows Server 2012 and System Center 2012 just came out, and I see PowerShell integrated into EVERYTHING. Basically, administration tasks are possible in the GUI … you can still click it. But for all the fun stuff you have to turn to PowerShell and … I love it!

Flo: Give us some examples of the power of PowerShell.

Jeff: When I started with PowerShell, I used it the wrong way. I knew VBScript, and I started to write PowerShell code with the VBScript way of thinking. For example, where a task took me about 20 lines of code in VBScript, in PowerShell it was still 18 lines. Then another Microsoft MVP, Jeffery Hicks, showed me how to use the pipeline, switch statements and stuff like that … and suddenly 18 lines of code became three. Even those three lines can be trimmed down to one line if you like, but then the code becomes harder to read and you have to know PowerShell really well to understand it.

Flo: What are the major improvements in PowerShell from Windows Server 2008 to Windows Server 2012? In my opinion, it’s that the syntax didn’t change.

Jeff: Ok, syntax … well (laughter). A little bit of history. PowerShell version 1 had basically one error: “access is denied”. PowerShell version 2 came with a lot more errors, like errors explaining what you probably did wrong and stuff like that, which was a very good improvement. Next, PowerShell came out with Remoting, and suddenly all products were using PowerShell Remoting. When PowerShell version 3 came out, you had PowerShell Workflow, based on the Windows Workflow Foundation (WF). It basically enables you to execute tasks in parallel, because natively PowerShell does things in sequence, and it is very powerful with bulk tasks like 3,000 live migrations. You preferably want to do that in parallel. Further, you have got member enumeration, which is an easy way to access the properties of every object in a collection.

Flo: Have you ever used PowerShell web access?

Jeff: Only in demos, not in a production environment yet, because my current customer doesn’t have a fully functional PKI infrastructure. When I do this, I want to do it securely. Opening PowerShell to the big bad world is not a good idea. I know the power of PowerShell, and with one command I can take your entire environment down if your Remoting is enabled. So that’s not something I want to give to everybody.

Flo: Do you use a PowerShell scripting environment or Notepad?

Jeff: When I write a very big script, functions and stuff like that, I use either PrimalScript or PowerShell Plus. For scripting together with other system admins who are not very fluent with PowerShell yet, I use the PowerShell ISE in version 3, because it has IntelliSense and it makes things a lot easier. When I need to do some quick and dirty scripting, I use Notepad.

Flo: Is there something PowerShell can’t provide?

Jeff: Is there something it can’t provide? (laughter). If anything can be done inside a product, it can be done in PowerShell! A lot of things are even removed from the GUI, and the only solution is PowerShell.

Flo: Can you give an example, please?

Jeff: Yeah, take Hyper-V. You are able to use affinity to keep virtual machines together on a node or … even anti-affinity to separate them from each other. Let’s say, for example, a SQL cluster with two virtual machines … you do not want those two machines on the same node. That’s where you use anti-affinity, and you can only configure that through PowerShell. You can configure affinity through the GUI, but it’s a lot of clicking all over the place, while anti-affinity is basically two, three lines of code.

Flo: Dell has many PowerShell Plugins and CLIs such as iDRAC and Power Center. Are you using such tools to integrate your environment with PowerShell?

Jeff: Let’s say … if I were a big system integrator and I was using Dell products for all my customers, I would script it once, make it a standard setup, deploy it at my customer site, hit my script … sit back, take some coffee … and by the time I finished my coffee, my entire environment would be up and running.

For most system admins I would recommend using System Center. Lots of system admins currently do not understand PowerShell; they don’t want to, and in their opinion they never will.

Flo: You should change that …

Jeff: That is exactly my challenge. In Holland, I am doing some workshops in the evening to make customers enthusiastic about PowerShell. I recently had a workshop with a company that does hosting, and they have more than 300 web servers … Windows Server with IIS, and they deployed them manually. They have an automated deployment through PXE boot, but enabling those server roles, enabling IIS packages and importing them … that’s all done by hand! CLICKING! So I asked them to give me some of the packages, I prepared a demo, and writing the entire script took me 40 minutes. Deploying it was: hitting enter and sitting back. Right now I can use that single script: put the packages in a certain directory and … I wouldn’t say it’s universal … but any package you save in that directory will be deployed to all your web servers when you hit the script. So it’s not 300 times clicking. It’s pressing enter, sitting back, and you are done. All you have to do is the preparation for the script, and the next thing is enjoying the coffee.

Flo: We should definitely catch up shortly … and show customers what PowerShell can do on Dell products.

Jeff: Yeah, sure!

Flo: Sounds good! Thank you very much and … talk to you soon.

Recommended PowerShell Resources

Jeff Wouters, PowerShell Magician

Twitter: @JeffWouters
Blog: http://jeffwouters.nl/

Don Jones, Microsoft MVP

Twitter: @concentrateddon
IT Pro Column: http://www.windowsitpro.com/topics/powershell-scripting/don-jones-on-powershell
Don’s Book on PowerShell: http://jdhitsolutions.com/blog/2012/11/learn-powershell-3-in-a-month-of-lunches-now-available/

Don’s Shell Hub: http://shellhub.com/

Jeffery Hicks, Microsoft MVP

Twitter: @JeffHicks
Blog: http://jdhitsolutions.com/blog/

PowerShell Communities

http://powershell.org/
http://powershell.com/cs/


Dell Smart Plug-in version 3.0 for HP Operations Manager version 9.0 for Windows is Available


Dell Smart Plug-in (SPI) version 3.0 for HP Operations Manager version 9.0 for Windows is available now!

The SPI v3.0 delivers new features for monitoring Dell PowerEdge Servers, Dell Storage devices such as EqualLogic and PowerVault MD Storage Arrays, and Dell Remote Access Controller (DRAC) and Chassis Management Controller (CMC) devices in environments managed by HP Operations Manager for Windows. Dell SPI v3.0 adds support for agent-free, out-of-band monitoring for the 12th generation of Dell PowerEdge servers via their embedded server management component, the integrated Dell Remote Access Controller (iDRAC), version 7 with Lifecycle Controller. The Dell SPI v3.0 also provides traditional agent-based monitoring with Dell OpenManage Server Administrator (OMSA) for 9th through 12th generation PowerEdge servers.

The key new features in Dell SPI v3.0:

  • Agent-Free monitoring of Dell PowerEdge 12th Generation Servers using Licensing.
  • Monitoring Dell EqualLogic and MD Storage Arrays.
  • Monitoring Dell Remote Access Controller (DRAC) and Chassis Management Controller (CMC) Devices.
  • Monitoring the Dell Connections License Manager (DCLM) availability and License statistics.
  • Periodic and instantaneous health monitoring for Dell devices.
  • Knowledge Base articles for supported Dell device SNMP Traps and DCLM alert messages.
  • SNMP Trap support for Dell EqualLogic, Dell PowerEdge 12th Generation Servers and Dell Remote Access Controller (DRAC) devices.
  • 1x1 console launch tool support for EqualLogic, MD Storage Arrays, Dell PowerEdge 12th Generation Servers and CMC and DRAC devices.
  • Support for Dell tools (console launch):
    • Dell OpenManage Essentials (OME) Console.
    • Dell OpenManage Power Center (OMPC) Console.
    • Dell Connections License Manager (DCLM) Console.
    • Dell Warranty Report.

The product download links and product documentation are available on the Dell TechCenter Dell OpenManage Connection for HP Operations Manager wiki page. There are also some new Technical Articles explaining how to use the Dell SPI to manage Dell devices, Agent-Free Server Monitoring, Dell SPI new features, etc. We encourage you to continue this conversation in the OpenManage Connections for 3rd Party Console Integration Forum if you have any comments or other feedback.

So Say SMEs in Virtualization and Cloud: Episode 49 Dell | VMware - Retro

Dell Open Source Ecosystem Digest: OpenStack, Hadoop & More 5-2012


Welcome to the fifth issue of the Dell Open Source Ecosystem Digest, a collection of relevant technical content around OpenStack and Hadoop authored by Dell as well as our partners and friends. Enjoy reading ... your feedback is greatly appreciated!

OpenStack

Cloudscaling: “Cloudscaler takes on leadership role at NANOG ” by Robert Cathey
http://www.cloudscaling.com/blog/company/cloudscaler-takes-on-leadership-role-at-nanog/

Mirantis: “Working through DNS and DHCP service configuration issues in OpenStack Nova” by Yury Taraday
http://www.mirantis.com/blog/dns-dhcp-service-configuration-openstack-nova/

MorphLabs: “Private Cloud in a Box Webinar Recap and Video” by Yoram Heller
http://www.morphlabs.com/blog/private-cloud-in-a-box-webinar-recap-and-video/

OpenStack Foundation: “OpenStack Outreach Program for Women Accepting Candidates”
http://www.openstack.org/blog/2012/11/openstack-outreach-program-for-women-accepting-candidates/

OpenStack Foundation: “Report: November month OpenStack meetup, India”
http://www.openstack.org/blog/2012/11/report-november-month-openstack-meetup-india/

Hadoop

Cloudera: “Contributing to Hadoop”
http://www.youtube.com/watch?v=Ee6T80bXmLk&feature=youtube_gdata

Cloudera: “Get started with Hadoop using Cloudera Enterprise - Part 1”
http://www.youtube.com/watch?v=RHu2qVMKh-U&feature=youtube_gdata

Cloudera: “Get started with Hadoop using Cloudera Enterprise - Part 2”
http://www.youtube.com/watch?v=OL2_SYVQVYA&feature=youtube_gdata

Datameer: “What is an Analytical App?”
http://www.datameer.com/blog/big-data-analytics-perspectives/what-is-an-analytic-app.html

Datameer: “Insightful Analytics in a Data Driven World” - Webinar Dec 6th 10.00 am PT / 1 pm ET
http://info.datameer.com/Delivering-Insightful-Analytics-in-Data-Driven-World.html

Dell

“Dell and MorphLabs Discuss Private Clouds” by Kamesh Pemmaraju - Personal Blog:
http://www.cloudel.com/dell-and-morphlabs-discuss-private-clouds/?utm_source=rss&utm_medium=rss&utm_campaign=dell-and-morphlabs-discuss-private-clouds

Contributors

Please find detailed information on all contributors in our Wiki section.

Contact

Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, System Center and more – #5


Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. Currently, there are only a few MVPs contributing to this compilation, but more will follow soon. @all MVPs: If you'd like me to add your blog posts to my weekly compilation, please send me an email (florian_klaffenbach@dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

Featured Posts of the Week!

Windows 2012 Hyper-V Advanced Network Security : DHCP Guard, Router Guard, Port Mirroring by Alessandro Cardoso

KB281662 – How To Use Windows Server Cluster Nodes As Domain Controllers by Aidan Finn

What Gartner Says Of Windows Server 2012 Hyper-V Versus vSphere 5.1 by Aidan Finn

Video interview with Didier Van Hoye (MVP Virtual Machine) about Storage Improvements in Windows Server 2012 by Carsten Rachfahl

Hyper-V

Windows 2012 Hyper-V Advanced Network Security : DHCP Guard, Router Guard, Port Mirroring by Alessandro Cardoso

KB281662 – How To Use Windows Server Cluster Nodes As Domain Controllers by Aidan Finn

Phew! Just Finished Writing The Final Chapter Of The New Windows Server 2012 Hyper-V Book by Aidan Finn

Windows Server 2012 Hyper-V: Virtual Disk VHD & VHDX recommendations by Thomas Maurer

Hyper-V VHDX Format Specification v1.00 by Thomas Maurer

Video interview with Didier Van Hoye (MVP Virtual Machine) about Storage Improvements in Windows Server 2012 by Carsten Rachfahl

Windows Server Core

What Gartner Says Of Windows Server 2012 Hyper-V Versus vSphere 5.1 by Aidan Finn

Setting up a Geo DNS environment in your test lab – part I by Johan Veldhuis

System Center Configuration Manager

UPDATE : Technical Documentation Download for System Center 2012 – Virtual Machine Manager #SCVMM #sysctr by James van den Berg

PowerShell

Copy and Mount a CD with PowerCLI by Jeffery Hicks

Microsoft Office

Change the Office 2013 Colour Theme by Aidan Finn

Exchange Online (Microsoft Cloud)

“Updating the MX Records in Exchange Online by January 1, 2013 in Germany” (post in German) by Kerstin Rachfahl

Events

Microsoft Management Summit 2013 Registration opens on December 3rd, 2012 by Didier van Hoye

 

Other MVPs I follow

James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management

Ravikanth Chaganti - MVP for PowerShell

Jan Egil Ring - MVP for PowerShell

Jeffery Hicks - MVP for PowerShell

Marcelo Vighi - MVP for Exchange

Lai Yoong Seng - MVP for Virtual Machine

Rob McShinsky - MVP for Virtual Machine

Hans Vredevoort - MVP for Virtual Machine

Leandro Carvalho - MVP for Virtual Machine

Ramazan Can - MVP for Cluster

Marcelo Sinic - MVP Windows Expert-IT Pro

Ulf B. Simon-Weidner - MVP for Windows Server - Directory Services

No MVP but he should be one

Jeff Wouters - PowerShell

Calling Developers - Dell Ubuntu Laptop Available


Last week, Dell announced the availability of a fully tested, ready-to-go, high-performance laptop for open source developers running Ubuntu 12.04 LTS, known as Project Sputnik.

Product Announcement from Barton George, Dell Developer Advocate

(Please visit the site to view this video)

The product, known as the XPS 13 Laptop, Developer Edition, has the following features:

  • 3rd Generation Intel Core i7-3517U
  • Ubuntu Linux 12.04 LTS
  • 13.3" HD (720p) display
  • 8 GB DDR3 SDRAM at 1600 MHz
  • 256 GB Solid State Drive
  • Intel HD Graphics 4000
  • 1 Year ProSupport with 1 Year NBD Limited Onsite Service After Remote Diagnosis
  • 2.99 lbs

For more information and purchase visit the product site.

Microsoft Masterminds Episode 2: Interview with Jan-Philipp Rombolotto, MVP Program Manager at Microsoft


Welcome to the second episode of tech talks with outstanding Microsoft community members. Most upcoming interviews will be with Microsoft Most Valuable Professionals (MVPs), and for that reason I interviewed Jan-Philipp Rombolotto, who is in charge of the MVP Program in German-speaking countries.

Microsoft Most Valuable Professional Mainpage 
www.mvp.microsoft.com

Jan-Philipp Rombolotto
de.linkedin.com/pub/jan-philipp-rombolotto/57/b03/691/en


Flo:  Nice to talk with you, Jan. Can you please introduce yourself?

Jan: Sure, Flo. I am the Microsoft MVP Award Program Manager for Germany, Austria and Switzerland operating out of our office in Munich.

Flo: Tell us a bit about the Microsoft MVP Program.

Jan: The Microsoft MVP Program was established almost 20 years ago. It’s a recognition for independent, exceptionally experienced experts in Microsoft products who deliver a lot of value to the IT community by sharing insights and helping others to make better use of our technology. The MVP Award is how we say “Thank you!” to those exceptional people. Collaborating with independent experts makes a lot of sense for both the IT community and Microsoft. The IT community gets valuable insights into Microsoft technologies, and in return Microsoft gets very accurate, unfiltered feedback on our products from the field.

Flo: How many people are currently awarded as MVPs globally? Can you also give us some details on the program’s scope? What product categories are you covering?

Jan: Currently we have 4,000 MVPs worldwide, and we won’t grow that number, since we have set our quality bar pretty high. We simply want to have the best and brightest members of the tech community in our program.

We have MVPs for almost every single product, which adds up to more than 90 product categories.

Flo: I just recently met a couple of Virtual Machine MVPs in person; it’s really fun to interact with them and their knowledge is amazing. Which additional categories do you have? Are you focusing on enterprise solutions, or do you have MVPs for consumer products as well?

Jan: You can categorize our MVPs either by products or by their professional scope. As for the professional scope, we do have four categories. These are: Developers, Information Workers, IT Professionals and Consumer MVPs.

Let me explain: We do have a bunch of MVPs focusing on the same product, such as Microsoft SharePoint for example. But each of those MVPs might have a different type of knowledge around that product. An IT Professional focuses on different aspects than a Developer. The IT Professional’s expertise is around deploying, running and maintaining SharePoint, whereas a Developer obviously has in-depth knowledge of developing for SharePoint.

Flo: How can one become a Microsoft MVP? What are your requirements?

Jan: That’s a very good question. We do not have a certification at Microsoft for the MVP Program, nor do we provide training for those who want to become an MVP. It’s a recognition for extraordinarily active and knowledgeable members of the Microsoft tech community.

There is no standardized path to becoming a Microsoft MVP, and we avoid awarding people who want to become an MVP for the sake of having a prestigious title. We only give the MVP Award to people with a strong intrinsic motivation and passion for technology – that’s our core requirement, and we are very selective on that.

Flo: How do you become aware of potential candidates for the Microsoft MVP Program?

Jan: As I mentioned earlier, we are looking for people who are passionate about technology and who engage with the IT community without expecting compensation in return. They have strong visibility and credibility in the IT community due to various activities such as speaking engagements at tech events, organizing user groups, writing a blog, sharing code, etc. Lastly, these people proactively provide sincere feedback to us, which helps us make our products even better.

Flo: How long does someone remain an MVP?

Jan: The MVP Award is limited to one year. Of course, we have MVPs who have been awarded for five years in a row, in some cases even 10 or 15 years. It’s a constant evaluation process every MVP has to go through every year.

Flo: So, you can also lose your MVP status?

Jan: Correct. But since all MVPs are passionate about technology and are active community members, we have very little fluctuation among MVPs.

Flo: What are the benefits of being an MVP?

Jan: Every MVP is of course under NDA and gets privileged access to our product groups and to information around upcoming products. We have several MVP events in each region which have a networking character … we host one event in Austria, one in Switzerland and two to three in Germany. In addition to that, we have a Global MVP Summit in Redmond every year. It’s the biggest event on the Microsoft campus, by the way.

Flo: I know around 15 MVPs in person and they are all very proactive, approachable … they are responding quickly when you reach out to them with technical questions. It’s a pleasure working with each of them.

Jan: I am very happy about MVPs collaborating with Dell people and vice versa.

Flo: What type of companies do MVPs work for?

Jan: Typically for IT companies, as you can imagine, from blue-chip-level executives through SMB CEOs and CTOs to the self-employed; the range is incredible. We also have numerous autodidacts who do not work within the IT industry at all. You can imagine that it is fascinating to me to manage such a heterogeneous group across the regions I am responsible for.

Flo: What’s the value for customers working with Microsoft MVP? How do they benefit?

Jan: MVPs are the best and brightest within the IT community. Their expertise and network is incredible and customers can benefit from that.

Flo: What if a Microsoft tech expert is actively engaging in non-Microsoft communities around Citrix or VMware products? Does he have a chance to be awarded as an MVP?

Jan: Sure, as long as an MVP does not work for a competitor in his field of expertise, there is no conflict of interest. In fact, we have several MVPs who are also awarded, for example, by Citrix for their expertise. Again, MVPs are independent experts, and we always emphasize their independence. The only conflict we see is employment at a competitor in the field of expertise, as MVPs do get a lot of NDA information and deep insights into the respective product or service.

Flo: How do companies find MVPs if they want to hire or work with them?

Jan: All MVPs worldwide with a public profile can be found on mvp.microsoft.com.

Flo: Can a Microsoft employee become an MVP?

Jan: No, MVPs are independent experts and as such a Microsoft employee can’t be awarded as MVP. Even former Microsoft employees must work outside the company for at least a year to be considered for the nomination process.

Flo: What if a Microsoft MVP doesn’t like a certain product feature and publically criticizes your company? How do you respond?

Jan: Again, MVPs are independent experts, and as such they can publicly criticize our company as well as our products and services. Their credibility is crucial to us; we and our customers can’t learn from hardcore fanboys who never criticize anything. Of course, we have a decent code of conduct. This code of conduct is, however, linked to the fact that MVPs are role models in the community; aggressive bashing or insulting individuals is something we do not tolerate. However, there are hardly any cases worldwide in which we have to take action.

Flo: Thank you for the interview, Jan.

Jan: You’re welcome, Flo!

Approaching Tomorrow's Cloud: Twitter Chat with Trend Micro and Dell


Tomorrow, December 4th, at 11 am CST, Dell and Trend Micro are hosting a live Twitter chat (#TrendChat) to discuss new approaches to the cloud for businesses and how companies can prepare for the next generation of secure cloud computing.

Participants:

Trend Micro is even offering prizes for participating - learn more in their Twitter chat blog post.


Advanced Power Management Features of the 12th Generation of Dell PowerEdge Servers


By Jeff Matthews, Sridevi Chandrasekaran, and Jeethendra Telagu of the Dell iDRAC Power team

 

Dell’s new 12th generation PowerEdge servers offer a rich set of new advanced power management features designed to help you monitor and manage power consumption and, in turn, control costs. The iDRAC7 supports all of the 11th generation power monitoring and reporting capabilities, in addition to the advanced reporting and management capabilities listed below.

The iDRAC7 provides power monitoring/reporting and configuration features such as:

  • Power Control – Basic system power control actions
  • Power Monitoring – Real-time and historical power consumption data
    • Tabular data and graphs, in both Watts and Amps
    • Minimum, Maximum, and Average power consumption over time
    • Peak and Instantaneous Headroom data
    • Cumulative power data over time
  • Power Inventory – Provides minimum and maximum expected consumption
    • Based on actual system component inventory
  • Power Budgeting – Ability to control maximum power consumption of a server
    • Via processor and memory throttling
    • Ability to set time-of-day or real-time consumption-based power policies via OMPC
  • Power Supply Management – Information and configuration of power supplies on rack and tower servers
    • Redundancy
    • Hot Spare
    • Power Factor Correction
  • OMPC Integration – Provides a one-to-many console to monitor and control power policies in the data center
  • Multiple user interfaces
    • Most power management features are available through the iDRAC GUI, RACADM, WinRM, OpenWSMAN, IPMI, OMSA, and OMPC

The iDRAC7 maintains and stores cumulative power data and updates this cumulative information every second to provide the most up-to-date information available.  The cumulative power data is presented to the supported interfaces in numerical format and can be saved externally.  This cumulative data can be used to assess the power consumption over time.
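As an illustration of what the cumulative data enables: given two cumulative energy readings and the elapsed time between them, the average power draw over that interval is simply the energy delta divided by the time delta. A minimal sketch — the reading values below are made up, and the literal export format from the iDRAC7 will differ:

```shell
# Two hypothetical cumulative energy readings (Wh), taken one hour apart,
# e.g. from externally saved iDRAC7 cumulative power data.
START_WH=120500   # cumulative energy at the start of the interval
END_WH=120830     # cumulative energy one hour later
HOURS=1

# Average power (W) = energy consumed (Wh) / elapsed time (h)
AVG_W=$(( (END_WH - START_WH) / HOURS ))
echo "Average power over the interval: ${AVG_W} W"
```

The same delta calculation works over any pair of samples, which is what makes the per-second cumulative counter useful for long-term consumption assessment.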

The iDRAC7 also monitors Historical Peaks, Historical Trends, System Headroom, and Power Supply readings.  This information can be used to set an Active Power Policy accordingly to conserve power consumption.

In future posts, we’ll expand on each of these features in more detail.

Dell PowerEdge Servers with iDRAC7 Fresh Air Cooling Features


By Pushkala Iyer, Daniel Chappelear, Balamurugan Gnanasam

 

Dell 12th Generation PowerEdge Servers now empower you with more choices regarding cooling infrastructure that allow you to achieve significant operational savings per megawatt (MW) per year. Dell servers, storage units or network switches with Fresh Air capability give you greater flexibility in regulating the operating temperature in data centers, a best practice that can help increase energy efficiency and decrease operational costs by eliminating the need for chillers.

What is Fresh Air Cooling? “Fresh Air” Cooling is an industry initiative to cool the data center by bringing outside air directly inside.

To take advantage of this initiative, the latest 12G Dell PowerEdge servers have been developed for operation at temperatures ranging from 41°F (5°C) to 113°F (45°C) and humidity from 5% to 90%. Dell servers with Fresh Air technology can tolerate up to 900 hours of 104°F (40°C) operation per year and up to 90 hours at 113°F (45°C).

When a system is in a “Fresh Air” environment, it is expected that there will be times at which the system may be operating above the normal ambient operating range. Systems and configurations that support this mode of operation are known as “Fresh Air Compliant”.

On 12th Generation servers, Dell makes it easy to do the following:

  • Check if the server is “Fresh Air Compliant”: This can be done via the iDRAC7 UI, RACADM or WSMAN interfaces.
  • Monitor the server’s operating time in an environment above normal thresholds: There are two separate temperature ranges, as can be seen in the screenshot below:

a) warning range (area above the yellow line)

b) critical range (area above the red line)

  • Receive Events / Notification when Warning / Critical Thresholds are crossed: iDRAC sends SNMP and/or email events when a server is approaching or has exceeded the acceptable amount of time operating in the warning or critical bands. For example, if the server spent more than 10% of the time above the warning level or more than 1% of the time above the critical level during a year, appropriate events are sent out.
  • View the Thermal History of 1 year on the iDRAC GUI/CLI: The iDRAC user interface can display historical thermal data by year, month or day, and it can indicate how much time was spent above the 10% and 1% operating levels during these periods. The GUI shows the average and peak temperature data as well. The duration of time spent above warning or critical thresholds can also be retrieved using RACADM. Below is a screenshot of the iDRAC GUI thermal history for the last day:

 

  • Export the Thermal History for up to 7 years: The entire thermal history for a period of 7 years can be exported to XML and CSV formats from the iDRAC UI, RACADM and WSMAN.
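Once exported, the history can be post-processed with ordinary command-line tools. The sketch below counts samples above the warning and critical thresholds in a CSV; the two-column `timestamp,temperature_C` layout is an assumption for illustration, not the actual iDRAC7 export schema:

```shell
# Hypothetical export layout: one "timestamp,temperature_C" row per sample.
# The real CSV exported by the iDRAC7 will have a different schema.
cat > thermal_history.csv <<'EOF'
2012-11-01T10:00,38
2012-11-01T11:00,41
2012-11-01T12:00,46
2012-11-01T13:00,39
EOF

# Count samples above the warning (40 C) and critical (45 C) thresholds.
awk -F, '$2 > 40 { warn++ } $2 > 45 { crit++ }
         END { printf "warning: %d, critical: %d\n", warn+0, crit+0 }' thermal_history.csv
```

With real export data, the same one-liner gives a quick sanity check against the 10%/1% operating-time limits without opening the GUI.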

 


Quick Steps For Updating System Software (BIOS & Other Firmware) Along With Linux OS Installation Using Kickstart File


This blog post is authored by Ravi A from the Dell Change Management team.

A Dell system uses system software (BIOS and other firmware such as RAID controller firmware, PCIe-SSD firmware, Network Interface Card (NIC) firmware, etc.) and a Linux operating system. The general practice is to first install the required Linux operating system and then perform the system software inventory and/or updates. Here we provide a one-stop solution: installing the required Linux operating system followed by inventory and update of the system software. This way, we end up with a Dell system running the required Linux OS and with the latest system software (BIOS and firmware) in use.

Note: Dell provides software products such as "Dell Deployment Tool Kit", "Dell System Build Update Utility", and so on that use "Dell Update Packages (DUPs)" for updating system software on Dell systems. We don't use them here.

Here we consider the installation of RHEL v6.x on a Dell system using the example Kickstart file shown below to achieve the above-mentioned one-stop solution. This example Kickstart file uses the "Dell Linux Online Repository" and other RHEL v6.x specific installation configuration details.

"Dell Linux Online Repository" and "Dell Update Packages"
--------------------------------------------------------------------
Dell provides "Dell Update Packages (DUPs)" for BIOS and system software updates for use with Dell systems. We can also apply system software updates using the "Dell Linux Online Repository".

Dell Linux Online Repository URL:  http://linux.dell.com/repo/hardware/latest/
Dell Update Packages User Guides/Manuals URL:  http://support.dell.com/support/edocs/software/smdup/

The User Guides/Manuals provide more information about the "Dell Linux Online Repository" and other DUP details. The "Dell Linux Online Repository" contains Dell OMSA, drivers, BIOS and firmware updates. It is also possible to create a local mirror equivalent to the "Dell Linux Online Repository".
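On a yum-based system such as RHEL, subscribing to the repository amounts to dropping a .repo file into place (Dell also publishes a bootstrap script at the repository URL that automates this). Below is a hand-written sketch; the repository id, file name, path and omitted GPG settings are illustrative choices, not Dell's exact layout:

```shell
# Normally this file would go in /etc/yum.repos.d/; a local path is used
# here so the sketch can run unprivileged.
REPO_FILE=./dell-hardware.repo

cat > "$REPO_FILE" <<'EOF'
[dell-hardware-latest]
name=Dell Linux Online Repository - hardware (latest)
# The exact per-OS subdirectory under this baseurl may differ; check the
# repository index before relying on it.
baseurl=http://linux.dell.com/repo/hardware/latest/
enabled=1
# gpgcheck is disabled only to keep the sketch short; enable it in production.
gpgcheck=0
EOF
```

After the file is in /etc/yum.repos.d/, a plain `yum install` can pull the firmware tools and update packages directly from the repository.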

Example Kickstart File
--------------------------

    Comments are included in the example Kickstart file to provide more information to the readers. This example Kickstart file uses the "Dell Linux Online Repository" to complete the required BIOS and other firmware inventory and updates. The system automatically reboots if we use the following Kickstart file to install RHEL v6.x, which also includes the post-install BIOS and other firmware inventory/updates. Readers may need to edit/comment/uncomment the contents of this example Kickstart file to suit their requirements. The following is the sequence of actions that occur during the RHEL v6.x installation using this Kickstart file. None of these actions require user intervention.

    1) Anaconda starts and completes the installation as per the configuration details in the Kickstart file
    2) Set up/bootstrap the "Dell Linux Online Repository"
    3) Install the required firmware tools/packages from the "Dell Linux Online Repository"
    4) The system automatically downloads applicable firmware from the "Dell Linux Online Repository"
    5) Run an inventory check on the system (creating the inventory XML file /var/log/pre_update_inventory.xml) to record the installed BIOS and other firmware versions
    6) Update both the BIOS and other firmware on the system to the latest versions available in the "Dell Linux Online Repository"
    7) Update the /etc/rc.local file to complete the following (post reboot):
       7.1) Run the inventory again (a second time) to confirm the completed updates; the second inventory is written to /var/log/post_update_inventory.xml
       7.2) Uninstall (clean up) the installed firmware tools/packages, delete the downloaded firmware, and so on
       7.3) Undo the updates made to /etc/rc.local
    8) Reboot the host to boot the OS
    9) Complete pending firmware updates, if any
    10) Boot the OS and auto-execute /etc/rc.local
    11) Wait for login

### BEGIN: EXAMPLE KICKSTART FILE   ###
# 1) This example Kickstart file was written by the Change Management Team, Dell Bangalore.
# 2) All comments are numbered to help readers understand the "EXAMPLE KICKSTART FILE"
# 3) Assumption: Internet access is available to reach the "Dell Linux Online Repository"

# 4) Pick up the required "RHEL v6.x Installation" media contents over the network via HTTP (http://192.168.0.8/RHEL_v6/),
#    or alternatively via NFS (commented out below)
url --url http://192.168.0.8/RHEL_v6/
#nfs --server=192.168.0.8 --dir=/home/ravi_a/RHEL_v6/ --opts=""

# 5) Language and Keyboard selection: English USA
lang en_US.UTF-8
keyboard us

# 6) Perform the installation in graphical mode
graphical

firstboot --disable

# 7) Enable first NIC to receive its IP from DHCP server
# 8) DHCP server should provide all required details/information to access Internet for accessing "Dell Linux Online Repository"
network --onboot yes --device em1 --bootproto dhcp --noipv6

# 9) Encrypted root password
rootpw --iscrypted $1$eP310tks$XUW2GVnj/IlTtG1H84RIe1

# 10) Enable SSH with Firewall
firewall --service=ssh

# 11) Authentication (log in using shadow passwords)
auth  --useshadow  --passalgo=sha512

# 12) Enforcing SELinux
selinux --enforcing

# 13) Set time zone to Asia/Kolkata
timezone --utc Asia/Kolkata

# 14) Update MBR and install bootloader after clearing/cleaning existing contents on the disk
bootloader --location=mbr
zerombr
clearpart --all --initlabel

# 15) Create two primary partitions: SWAP partition and ROOT partition
part swap --fstype="swap" --size=4096
part / --asprimary --fstype="ext4" --grow --size=1

# 16) Have Anaconda log information useful for debugging RHEL v6.x installation issues with this Kickstart file
logging --level=info

# 17) No "RHEL Installation Key" available, so skip it
key --skip

# 18) Reboot after the installation completes
reboot

# 19) Nothing to do during the pre-install stage
%pre --interpreter=/usr/bin/python
%end

# 20) Inventory and/or update both the BIOS and system software (firmware) using shell commands
%post --interpreter=/bin/bash

# 21) Make sure the network is up for accessing the "Dell Linux Online Repository"
/sbin/service NetworkManager restart;
/sbin/service network restart;

# 22) Setting Up/Bootstrapping "Dell Linux Online Repository"
#     We can also use a URL to local mirror of "Dell Linux Online Repository", if available
wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash;

# 23) Check return value of above wget command to confirm everything is ok to proceed further
ret_val=$?;

if [ "$ret_val" = "0" ]; then
   # 24) Install required firmware tools
   yum -y install dell_ft_install;

   # 25) Check return value of above yum command to confirm everything is ok to proceed further
   ret_val=$?;
   if [ "$ret_val" = "0" ]; then
      # 26) Downloading applicable firmware
      #     Bootstrap firmware is a process where the latest BIOS or Firmware
      #     update RPMs for the system are downloaded from the repository, along
      #     with the utilities necessary to inventory and apply updates on the system
      yum -y install $(bootstrap_firmware);

      # 27) Check return value of above yum command to confirm everything is ok to proceed further
      ret_val=$?;
      if [ "$ret_val" = "0" ]; then
         # 28) Complete an inventory of the system to record the installed BIOS and other firmware versions.
         #     The file /var/log/pre_update_inventory.xml holds the BIOS and other firmware details as they were before the updates.
         inventory_firmware > /var/log/pre_update_inventory.xml;

         # 29) Check return value of above inventory_firmware command to confirm everything is ok to proceed further
         ret_val=$?;
         if [ "$ret_val" = "0" ]; then
            echo "*** NOTE: System Inventory Went Through Successfully ...";
         else
            echo "*** ERROR: System Inventory Failed ...";
         fi

         # 30) Update BIOS and other firmware on the system to their
         #     latest available versions on "Dell Linux Online Repository"
         update_firmware --yes;

         # 31) Check return value of above update_firmware command to confirm everything is ok or not
         ret_val=$?;
         if [ "$ret_val" = "0" ]; then
            echo "*** NOTE: BIOS & Firmware Updates Went Through Successfully ...";
         else
            echo "*** ERROR: BIOS and/or Firmware Updates Failed. Please check /var/log/firmware-updates.log to know more details.";
         fi

         # 32) The above two yum commands installed many RPMs. Here we update /etc/rc.local to run the post-update inventory and clean up those RPMs after the reboot.
         # 33) The post-update inventory will be written to /var/log/post_update_inventory.xml.
         # 34) We append all required commands to /etc/rc.local as a "single line" (the last line of /etc/rc.local). That line later rewrites /etc/rc.local to remove itself.
         echo "inventory_firmware > /var/log/post_update_inventory.xml; yum -y remove dell_ie_*; yum -y remove \$(bootstrap_firmware); yum -y erase \$(rpm -qa | grep srvadmin); rpm -e \$(rpm -qa | grep dell); yum clean all; rm -rf /var/cache/yum/*; rm -f /etc/yum.repos.d/dell-omsa-repository.repo; cp -v /etc/rc.local /tmp/rc.local; grep -v \"inventory_firmware > /var/log/post_update_inventory.xml;\" /tmp/rc.local > /etc/rc.local;" >> /etc/rc.local;
      else
         echo "*** ERROR: Applicable Firmware Download Failed ..";
      fi
   else
      echo "*** ERROR: Required Firmware Tools Installation Failed ..";
   fi
else
   echo "*** ERROR: Setting Up/Bootstrapping \"Dell Linux Online Repository\" Failed ...";
fi

echo; echo "NOTE: A reboot is recommended; the system will now reboot automatically ...";

%end

# 35) List of packages/package groups to be installed
%packages --ignoremissing
@additional-devel
@base
@client-mgmt-tools
@core
@debugging
@basic-desktop
@desktop-debugging
@desktop-platform
@desktop-platform-devel
@development
@directory-client
@eclipse
@emacs
@fonts
@general-desktop
@graphical-admin-tools
@graphics
@input-methods
@internet-browser
@java-platform
@legacy-x
@network-file-system-client
@performance
@perl-runtime
@print-client
@remote-desktop-clients
@server-platform
@server-platform-devel
@server-policy
@tex
@technical-writing
@virtualization
@virtualization-client
@virtualization-platform
@x11
-libXinerama-devel
-openmotif-devel
-libXmu-devel
xorg-x11-proto-devel
-startup-notification-devel
-libgnomeui-devel
-libbonobo-devel
-junit
libXau-devel
-libgcrypt-devel
-popt-devel
libdrm-devel
-libXrandr-devel
-libxslt-devel
-libglade2-devel
gnutls-devel
-pax
-python-dmidecode
-oddjob
-wodim
-sgpio
-genisoimage
mtools
systemtap-client
-abrt-gui
desktop-file-utils
-ant
rpmdevtools
-jpackage-utils
rpmlint
-certmonger
pam_krb5
-krb5-workstation
-netpbm-progs
openmotif
libXmu
libXp
-perl-DBD-SQLite
-libvirt-java
ipmitool
OpenIPMI-libs
OpenIPMI
%end

###  END: EXAMPLE KICKSTART FILE   ###
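The rc.local trick described in comments 32-34 (append one long line to /etc/rc.local which, after running once at the next boot, rewrites the file to remove itself) can be tried out safely in isolation. This is a minimal sketch using a temporary file in place of the real /etc/rc.local:

```shell
#!/bin/bash
# Demonstrate the self-removing rc.local line from the %post section,
# using a temporary file so the real /etc/rc.local is left untouched.
RC=$(mktemp)
printf '%s\n' '#!/bin/sh' ': existing rc.local contents' > "$RC"

# Append a one-shot line: it does its work (here just an echo), copies the
# file aside, and greps itself back out -- the same pattern as comment 34.
echo "echo ran-once; cp $RC /tmp/rc.local.bak; grep -v \"echo ran-once;\" /tmp/rc.local.bak > $RC;" >> "$RC"

sh "$RC"                                  # first "boot": prints ran-once
remaining=$(grep -c "ran-once" "$RC")     # the one-shot line is now gone
echo "one-shot lines left after first run: $remaining"
rm -f "$RC" /tmp/rc.local.bak
```

A second `sh "$RC"` would run only the original contents, which is exactly why the installed server behaves normally from its second boot onward.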

Summary (Post log-in)
    1) The files /var/log/pre_update_inventory.xml and /var/log/post_update_inventory.xml provide the pre- and post-update BIOS and firmware inventory details. The example Kickstart file contents and the log of its execution are stored in the /tmp directory, so check the log files in /tmp/ when debugging installation and system software update issues.
    2) The file /var/log/firmware-updates.log provides more details about the system software updates applied to the system by the example Kickstart file.
    3) The example Kickstart file is especially useful when many physical servers require RHEL v6.x installation followed by inventory and/or updates of the latest system software (BIOS and other firmware).
    4) The same one-stop solution can also be achieved with other RPM-based Linux distributions.
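To see what actually changed (point 1 above), the two inventory files can be compared directly. This is a minimal sketch; it makes no assumptions about the inventory XML schema and simply relies on `diff`:

```shell
#!/bin/bash
# compare_inventories: diff the pre/post firmware inventories written by
# the example Kickstart file. Default paths match its %post section.
compare_inventories() {
    local pre=${1:-/var/log/pre_update_inventory.xml}
    local post=${2:-/var/log/post_update_inventory.xml}
    if [ -r "$pre" ] && [ -r "$post" ]; then
        # diff exits 0 when the two inventories are identical
        diff -u "$pre" "$post" && echo "No version changes recorded."
    else
        echo "Inventory files not found; did the %post section run?" >&2
        return 1
    fi
}

compare_inventories "$@" || true   # tolerate missing files when run standalone
```

Lines beginning with `-`/`+` in the diff output correspond to firmware versions before and after the update pass.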

DISCLAIMER
    This document/article is provided for informational purposes only. The author and Dell give no warranties, either expressed or implied, in this document. All the information in this document/article, including URLs and other Internet Web site references, is subject to change without notice. The entire risk of the use or the results from the use of this document remains with the user. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

OpenStack Board of Directors Talks: Episode 1 with Rob Hirschfeld, Principal Cloud Architect at Dell


Learn firsthand about OpenStack, its challenges and opportunities, market adoption and Dell’s engagement in the community. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. In order to make these interviews easier to read, I structured them into my (subjectively) most important takeaways. Enjoy reading!


#1 Takeaway: Dell’s interest in OpenStack derives from market pressure on finding open source alternatives to Amazon and VMware

Rafael: Dell is one of the very early contributors to OpenStack. Why is Dell engaging in this project?

Rob: We were involved with Rackspace and NASA before there was public interest in OpenStack. Our involvement in open source cloud goes back even further than that, because we were part of market dynamics pre-OpenStack. We were struggling to find open source cloud alternatives for our customers with a common API, a community, support and momentum. There was a lot of pressure in the market to have an alternative ecosystem to Amazon as a public cloud and to VMware as a licensed internal cloud. So our team was looking for something in that space, and OpenStack appealed to us because it was open source, and it was a community driven effort with a group of people supporting it. So far the consortium of people involved in OpenStack has by far exceeded what I was expecting it to be able to accomplish.

The open source clouds that were on the market at that time, Eucalyptus and Cloud.com, were both really interesting alternatives but neither of them had the type of collaborative community that was the goal of OpenStack. That was REALLY the difference. OpenStack started on day zero as a community of collaboration, and there wasn’t one company that was maintaining the code base. That really differentiates the project.

#2 Takeaway: Dell is focusing on operationability of OpenStack for customers rather than on code contributions to the community

Rafael: How does Dell contribute to OpenStack?

Rob: At Dell we knew from hard experience in the field that the number one challenge for customers was going to be using OpenStack. Whether it worked or not, installing it, getting a repeatable build, testing it and using it were going to be significant challenges for our customers.

We made a very deliberate choice, that rather than trying to contribute code to OpenStack, we wanted to tackle reference deployments. Our goal was to make OpenStack as deployable as possible. So, we have been mainly operationally focused on OpenStack in helping our customers and the community to get OpenStack installed.

Moving forward, we will foster OpenStack integration into the Dell portfolio and we will bring in more partners from the open source community. We are going to expand our footprint with Dell Crowbar around making OpenStack more deployable, upgradable and available.

In brief: Dell’s interest in OpenStack has been very pragmatic. OpenStack is something we really see a market need for.

#3 Takeaway: Dell Crowbar is an OpenStack deployment mechanism targeted at experienced high scale customers

Rafael: Let’s talk a bit about Dell Crowbar, your team’s deployment mechanism for OpenStack. I oftentimes hear from the OpenStack community that it’s considered a solution targeted at the experienced admin, whereas most companies participating in the OpenStack game are building solutions that are supposed to replace that person. Why did Dell pick that route?

Rob: That’s an excellent question. Dell Crowbar is definitely a DevOps tool for customers who are planning to build and run OpenStack themselves. The goal for Dell Crowbar is to get OpenStack running without having to read the manuals, but it’s not a managed appliance. Dell has partners like Morphlabs who are selling a managed appliance. There are great companies out there like Piston Cloud and Nebula who are very focused on the managed appliance aspect. I believe there is plenty of room in the market for those types of solutions.

My team is focused on high scale customers. They don’t want somebody to take over their operations. They want somebody to accelerate their operations. That’s why we’re working with Chef, that’s why we are adding Puppet capabilities into Dell Crowbar.  Over time, Dell Crowbar is going to be appliance-like, more and more easy to use, but our strategy is to work with a very targeted market of early adopters, open source experienced customers and to learn jointly about OpenStack best practices.

This is what I truly enjoy about open source … those communities know SO MUCH and it’s fun to have a pure relationship with them, to trade information among equals. Some of those community members are even more in field than I am, and if I switch into the “I am doing it for you!” mode, it changes the relationship.

#4 Takeaway: OpenStack raw is used by sophisticated community members, whereas OpenStack distributions target less experienced customers seeking a managed appliance

Rafael: Let’s talk a bit about OpenStack raw vs. OpenStack distributions. What is Dell’s offering around both type of solutions?

Rob: OpenStack distributions are a bit tricky because of the way customers interpret this subject.

Dell actually supports a number of distributions and we will expand our offering in that area. For example, SUSE is using Dell Crowbar as the deployment mechanism for their OpenStack distribution. Currently our shipping products use Canonical’s distribution, our Hadoop solution uses Red Hat and CentOS.

To me, OpenStack distributions are about support: whom can the customer call to get a bug fixed?

As for the OpenStack raw part: We have a new feature in the open source release of Dell Crowbar that we call Pull From Source (PFS). I am really excited about that feature because it addresses a new market: That feature lets you pull OpenStack raw … not necessarily off trunk … we expect customers will set up their own clone off trunk and then pull from that. It allows a customer who is in dev mode and working on a new feature to test it against their own deployment, to create a real test infrastructure with multi-nodes and real workloads.

We consider this a very important use case for addressing deployability in pre-release. We expect our high scale customers  who are a little bit ahead of trunk … they fix bugs, tweak and tune things … to use techniques like PFS.

During the recent OpenStack Summit Rackspace made a big point that their cloud runs on OpenStack pretty much off trunk, not on a distribution. They are pulling source code in and people are testing it, using it, fixing bugs and pushing code back. That’s exactly the type of vibrant community we want to see.

At the same time, there is a growing community that wants to use OpenStack distributions with support, certifications and they are fine with being 6 months behind OpenStack off trunk. That’s good, and we want that shadow, we want that combination of pure minded early adopters and less sophisticated OpenStack users both working together.

#5 Takeaway: OpenStack upgradability is currently the biggest barrier to adoption

Rafael: What are the biggest barriers to OpenStack adoption as of now?

Rob: That’s an excellent question. The most significant barrier to adoption is the release cycle. It delays customers starting with OpenStack because the next release is always just around the corner. OpenStack has a very fast release train.

As a community, we haven’t invested enough yet in upgrade strategies. My team’s top priority is addressing this issue. That’s what we hear in every customer meeting: “You must give me a way … if I take OpenStack Essex today, to get to OpenStack Folsom and from there to OpenStack Grizzly.” Customers want the innovation, but they want to be able to transition easily from one release to another.

On the other hand, things that I thought might be difficult turned out to be easy. For example, Linux and KVM have not been a barrier for customers, and neither has switching to a new API. At Dell we hit a barrier in how fast we can support new hardware versions. A lot of our early adopters like to get new hardware as fast as they possibly can, and sometimes hardware shows up at a customer site before it does in our lab for testing. (laughs)

Rafael: Let me dwell a bit more on upgrading OpenStack, Rob. What does a customer specifically need to do when moving from OpenStack Essex to Folsom for example? Does he need to deploy from scratch or is there an easy upgrade path? What are the issues that a customer might face in that process?

Rob: There is now guidance in the community around that process … but it is a manual process. Our advice so far has been to do a new deployment. We’ve worked hard to make OpenStack deployment with Dell Crowbar as fast as possible, but at the same time it’s not a very good answer. We’re working on it. What we found is that most of our customers who are deploying OpenStack Essex are doing it as pilots and proofs of concept and upgrading OpenStack hasn’t been that much of a burden.

We do have customers who did much more significant investments in production deployment around OpenStack Essex and even in the previous release Diablo. They are trying to figure out the best way to do migrations. My advice to customers is always to use DevOps tools for cloud deployments, to invest time into automation to migrate your applications more easily.

#6 Takeaway:  Most customers do OpenStack proof of concepts. Only very few use OpenStack in production

Rafael: Let’s talk about proof of concept versus production, Rob. How are customers using OpenStack and can you give examples for both scenarios?

Rob: Most of our customers do proof of concepts and pilots. They are putting OpenStack through dev test cycles, they are doing dev workloads. In some cases customers are going with OpenStack into production. In that case they are not using our tool, because Dell Crowbar doesn’t do automatic production deployment yet … that’s something we’re working towards. Anybody who is doing OpenStack in production is building up operational expertise. Most of those customers are OpenStack contributors, they are very active in the community. That’s what it takes today to run OpenStack in production: a certain level of commitment to the OpenStack community.

AT&T is on top of my mind as well as Dreamhost. Both of those are hosting examples. I just recently talked to a communications company that is doing a production deployment for one of their media apps, I know of a financial institution that is running a big data production cluster. I know of very few companies running real OpenStack production clusters, most of them are trying out things right now. Even Rackspace is operating OpenStack in dual mode. Customers can opt in and out. But we will get there.

#7 Takeaway: OpenStack is meant to be an Amazon and VMware alternative, for different reasons though

Rafael: I oftentimes hear two different statements. One is: “OpenStack is an alternative to Amazon.” The other is: “OpenStack is an alternative to VMware … maybe, hopefully in two or three years from now.” Which of these statements is true?

Rob: I think that both statements are accurate, but for different reasons. OpenStack is an alternative to Amazon for public cloud companies like Rackspace, Dreamhost and many others. They want to drive an ecosystem where they can create a cloud market, which is not Amazon’s objective. Their objective is to be THE cloud. For customers who want choice and portability, OpenStack makes a lot of sense, as it does for providers who want to differentiate on service.

On the VMware side, customers I talk to do want alternatives to licensed software, whose costs scale linearly as they grow their infrastructure. They want the transparency of open source, they want flexibility. Those are real concerns for our customers, and they are willing to trade license costs for needing more operational knowledge. Over time their need for operational knowledge of OpenStack will decrease, whereas licensing costs would keep scaling linearly.

Having that alternative with OpenStack will definitely help customers and it will drive response both from Amazon and VMware.

#8 Takeaway: VMware joining OpenStack is beneficial for the community, rather than a threat

Rafael: How do you view VMware joining OpenStack? Is it a threat to OpenStack or does VMware add value to the project?

Rob: That’s a really loaded question. (laughs) You have to split that question into pieces. On the Nicira acquisition side, which brings VMware into OpenStack as a major contributor: Nicira is doing a lot of very important work, and they are helping drive Quantum forward.

I think VMware has a lot to offer the community and their presence is good. I am not afraid of them coming in and then trying to establish walls, make things proprietary and disrupt the community. There is no indication right now that we need to protect OpenStack from VMware. They are showing leadership in OpenStack, and frankly they have a lot of open source projects which they are managing very robustly. These fears are overblown, and if someone tries to jockey OpenStack, the community pushes back pretty effectively.

#9 Takeaway: OpenStack has a very broad range of early adopters. But mass market adoption won’t happen until upgradability issues get resolved

Rafael: Let us speak about market adoption. Who are the early adopters of OpenStack? And when do you expect OpenStack to hit the tipping point for mass market adoption?

Rob: The early adoption is much more distributed than we had thought. Hosting companies definitely jumped in fast, because they have the operational ability and the financial incentives are very well aligned. But I have also seen educational groups coming in, government entities and financial institutions which are known for aggressively adopting new technologies. Some of them are already comfortable with Hadoop … and I believe there are going to be more synergies between OpenStack and Hadoop, we haven’t seen them emerge yet … although I am a little biased because my team does both. (laughs) Definitely social media, online retail and gaming companies are amongst the early adopters … every single one of those segments is already operationally sophisticated. By and large the companies I talk with are already strong in Linux and DevOps, and they are not afraid to get involved in these technologies. Unfortunately many companies I talk with are not in a position where their corporate culture allows them to contribute back to open source. So there is a bit of a shadow effect of companies using OpenStack without showing up in the community.

We are not going to cross the chasm into mainstream until we solve the upgrade problem. That’s not surprising, but we are working very hard to help customers make adoption easier.

#10 Takeaway: Dell is in the process of building an entire ecosystem of OpenStack solutions for various types of customers

Rafael: Rob, for all those interested in Dell’s commercial offering around OpenStack … can you give a brief overview? We already discussed Dell Crowbar, the deployment mechanism for OpenStack, as well as some of the OpenStack distributions Dell supports. What else do we have in our portfolio?

Rob: We have a very clear vision of what we want to do in OpenStack, and it involves building up what we consider a full taxonomy. We believe there is a need for orchestration at the API level with partners like enStratus. There is a need for automated deployment and upgrade … what we want to build is a full experience.

We’ve done this to a certain extent with our solution offering, expanding with partners and different capabilities. One of the things we are trying to do is to start with a free open source option and then, as customers get into larger deployments, to bring in licensed capabilities as well as further free options with support around them. Dell already has a lot of assets around storage, networking and compute … software products, cloud services. We have a very broad view of enabling the ecosystem.

Rafael: I am in the process of setting up a wiki page at Dell TechCenter that provides customers an overview over our OpenStack offering: Dell Crowbar as our DevOps tool in its various shapes and forms, OpenStack distros we support … cloud services we build around OpenStack … hardware capabilities optimized for OpenStack.

Rob: You just summed up exactly what our challenge is at Dell. We are working with different partners to bring OpenStack to different customers in different ways. It is confusing. Your question about Dell Crowbar was right … it is targeted at a certain class of users, and I don’t want enterprise customers who expect a lot of shiny chrome and zero touch. That’s not the target for Dell Crowbar right now. We definitely need that sort of magic decoder page to help customers understand our commercial offering.

#11 Takeaway: The main challenge for the OpenStack board is defining OpenStack in detail

Rafael: My final question to you Rob is: What are the challenges for the OpenStack Board of Directors?

Rob: There are some really weighty issues in front of the board for this coming year. People should be aware of them because they really shape OpenStack.

We just formed the Technical Board, we have got the User Committee, and we can actually start discussing the most significant bundle of issues: “What is the core of OpenStack? What defines OpenStack? What makes OpenStack … OpenStack?” For example, a project like Swift, which is an important, widely used part of OpenStack: Is it a required core component of OpenStack? Or is it an optional component?

That is a serious question because the way the OpenStack board defines what’s core and non-core, helps people bringing in new projects understand where they fit in. That also changes the incubation definition. If it’s core, it is going to have a certain incubation pattern. If it’s not, it might have a different incubation option.

That leads us straight into the question: “Is OpenStack an API? Or is OpenStack an implementation?” Today most people don’t realize that it’s really an implementation. If OpenStack is the API, then how do we provide a fitness test to make sure that someone’s implementation of OpenStack complies? We have a lot of these questions around Swift but also in the Nova, Cinder and Quantum pieces. If someone wants to replace the implementation of Swift, which is about how objects are actually stored: How do they know that they comply with the Swift API, so that they can be certified in OpenStack Swift? … even if they didn’t use the RSync implementation of Swift?

The same would be true if you reimplemented the Nova API but back ended it with VMware vCloud … is that OpenStack or is it not?

Defining the API is really important. At the same time OpenStack progressed really quickly because we haven’t gone through committees and API negotiations … and that worked. (laughs) But part of OpenStack’s maturity is to define the API, and the Board of Directors has to find the resources to back these decisions. It also has to find resources in the community to provide API testing.

The other thing on the technical side is that we need to drive upgrade into the code. It’s a really significant challenge to make sure that upgradeability is a key factor in the development cycle. We need to drive much more operational testing of OpenStack releases earlier in the cycle instead of waiting until the release is done. That’s not a good pattern; we need to do it earlier.

It’s important to understand what very pragmatically the challenges around OpenStack are.

Rafael: Rob, thank you very much for taking time. Be sure I will come back soon with even more questions.

Rob: (laughs) You’re welcome!

Resources

Rob Hirschfeld

Personal Blog: http://robhirschfeld.com/ 

Personal Youtube Channel: http://www.youtube.com/user/rollingwoodcitizen

Twitter: https://twitter.com/zehicle

OpenStack

OpenStack Foundation: http://www.openstack.org/

OpenStack at Dell: http://dell.com/openstack

Dell Crowbar: https://github.com/dellcloudedge/crowbar

Feedback

Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com

Come Be Social...In Person


Everyone knows that the Dell TechCenter crew loves to mingle with our fellow geeks! So I’m proud to announce that we will be joining forces with Emulex, Brocade, and Dell PowerVault MD3 Storage to bring you the “Better Together” tweetup next Wednesday at Bar96 in Austin (96 Rainey Street) from 6-11 p.m.

Please come out and chat with us about all things tech, Dell World, and anything else. If you happen to snag a picture with one of the Dell TechCenter crew, be sure to tweet it out with the #DellWorld hashtag and I might have a prize for you! Be sure to register here, and share with your friends! If you have any issues getting in, feel free to shoot me a tweet @DennisMSmith. Hope to see you there.

For the original invite please see this blog on the DellWorld.com site. If you want to check out all Dell World 2012-related posts, check out the Dell World blog posts link.

Dell Open Source Ecosystem Digest: OpenStack, Hadoop & More 6-2012


This week’s OpenStack highlight: “CERN aims to ramp to 15k hosts with 100k to 300k VMs by 2015” presentation by Tim Bell at Swiss OpenStack User Group. You can view the presentation at Slideshare.

Enjoy reading the sixth issue of the Dell Open Source Ecosystem Digest, a collection of relevant technical content around OpenStack and Hadoop authored by Dell as well as our partners and friends. Your feedback is greatly appreciated!

PS. I also invite you to read the first episode of my OpenStack Board of Directors Talk Series, which I launched yesterday. My first interviewee is Rob Hirschfeld, Principal Cloud Architect at Dell. I am very happy about Rob’s thoughts on the interview, which he posted on his personal blog:

“This is not puff interview – We spent an hour together and Rafael did not shy away from asking hard questions like “Why did Dell jump into OpenStack?” and “is VMware a threat to OpenStack?”  [...] There is some real meat in these answers about OpenStack, Dell, Crowbar and challenges facing the project. WARNING: My job is engineering, not marketing. You may find my answers (which are MY OWN ANSWERS) to be more direct that you are expecting. If you find yourself needing additional circumlocution then simply close your browser and move on.”

OpenStack

Mirantis: “Webinar 12/12/12: Building out Storage as a Service with OpenStack Cloud” by David Fishman http://www.mirantis.com/blog/webinar-12-december-2012-building-out-storage-as-a-service-with-openstack-cloud/

Mirantis: “Using an orchestrator to improve Puppet-driven deployment of OpenStack” by Oleg Gelbukh http://www.mirantis.com/blog/orchestrator-puppet-driven-deployment-openstack/

OpenStack Foundation: “1st Swiss OpenStack User Group”
http://www.openstack.org/blog/2012/12/1st-swiss-openstack-user-group/

OpenStack Foundation: “OpenStack at CERN” by Tim Bell at Swiss OpenStack User Group meeting http://de.slideshare.net/noggin143/20121115-open-stackchusergroupv12

OpenStack Foundation: “Announcing OpenStack Day 15 December, Bangalore India” http://www.openstack.org/blog/2012/12/announcing-openstack-day-15-december-bangalore-india/

OpenStack Foundation: “Vietnam OpenStack Community launch 30/9/2012 – report by Hang Tran” http://www.openstack.org/blog/2012/11/vietnam-openstack-community-launch-3092012/

OpenStack Foundation: “OpenStack Outreach Program for Women Accepting Candidates” by Stefano Maffulli http://www.openstack.org/blog/2012/11/openstack-outreach-program-for-women-accepting-candidates/

OpenStack Foundation: “OpenStack Nova, Glance, Keystone, Cinder, Quantum and Horizon 2012.2.1 released” http://lists.openstack.org/pipermail/openstack-announce/2012-November/000057.html

OpenStack Foundation: “What to expect from Grizzly-1 milestone” by Thierry Carrez
http://fnords.wordpress.com/2012/11/23/what-to-expect-from-grizzly-1-milestone/

Piston Cloud: “A Letter from Jim Morrisroe” by Jim Morrisroe
http://www.pistoncloud.com/2012/12/a-letter-from-jim-morrisroe/?utm_source=rss&utm_medium=rss&utm_campaign=a-letter-from-jim-morrisroe

Hadoop

Datameer: “Win $1000 in Datameer’s Analytic App Building Contest” by Rich Taylor http://www.datameer.com/blog/announcements/win-1000-datameer-analytic-app-building-contest.html

Pentaho: “Impala – A New Era for BI on Hadoop” http://blog.pentaho.com/2012/11/30/impala-a-new-era-for-bi-on-hadoop/

Dell

“OpenStack Board of Directors Talks: Episode 1 with Rob Hirschfeld, Principal Cloud Architect at Dell” by Rafael Knuth at Dell TechCenter
http://en.community.dell.com/techcenter/b/techcenter/archive/2012/12/06/openstack-board-of-director-talks-episode-1-with-rob-hirschfeld-principal-cloud-architect-at-dell.aspx

“Quickly and Easily Build Your Own Private Cloud with Dell and SUSE” by Tui Leauanae at Inside Enterprise IT
http://en.community.dell.com/dell-blogs/enterprise/b/inside-enterprise-it/archive/2012/12/04/quickly-and-easily-build-your-own-private-cloud-with-dell-and-suse.aspx

“Calling Developers - Dell Ubuntu Laptop Available” by Stephen Spector at Dell TechCenter http://en.community.dell.com/techcenter/b/techcenter/archive/2012/12/03/calling-developers-dell-ubuntu-laptop-available.aspx

“Blast off! Sputnik launches as the Dell XPS 13 Laptop, Developer Edition” by Nnamdi Orakwue at Direct2Dell http://en.community.dell.com/dell-blogs/direct2dell/b/direct2dell/archive/2012/11/29/blast-off-sputnik-launches-as-the-dell-xps-13-laptop-developer-edition.aspx

“Sputnik Pricing Update” by Lionel Menchaca at Direct2Dell http://en.community.dell.com/dell-blogs/direct2dell/b/direct2dell/archive/2012/11/30/sputnik-pricing-update.aspx

“Seeking OpenStack Foundation Board Seat for 2013 (please nominate me)” by Rob Hirschfeld - Personal Blog: http://robhirschfeld.com/2012/11/30/openstack-board2013/

Contributors

Please find detailed information on all contributors in our Wiki section.

Contact

Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #6


Hi Community,

Here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP community.

Currently, there are only a few MVPs contributing to this compilation, but more will follow soon.

@all MVPs: If you'd like me to add your blog posts to my weekly compilation, please send me an email (florian_klaffenbach@dell.com) or reach out to me via Twitter (@FloKlaffenbach).

Thanks!


Featured Posts of the Week!

Create PowerShell Scripts with a Single Command by Jeffery Hicks

Me Being Interviewed About Windows Server 2012 Hyper-V by Aidan Finn

Checking Host Integration Services Version on all Nodes of A Windows Server 2012 Hyper-V Cluster With PowerShell by Didier van Hoye

Microsoft Virtualisierungs Podcast Folge 25: PowerShell 3.0 by Carsten Rachfahl


Hyper-V

Me Being Interviewed About Windows Server 2012 Hyper-V by Aidan Finn

Presentation: Configuring and Using the New Virtualization Features in Windows Server 2012 by Lai Yoong Seng

Checking Host Integration Services Version on all Nodes of A Windows Server 2012 Hyper-V Cluster With PowerShell by Didier van Hoye

Announcing Winners of the Microsoft Private Cloud Computing Book by Hans Vredevoort

Windows Server Core

Hardware Support Matrix For Windows Server 2012 by Lai Yoong Seng

Presentation:- Setting up Storage Features in Windows Server 2012 by Lai Yoong Seng

Using SMB3.0 in your Private Cloud Fabric by Kristian Nese

System Center Virtual Machine Manager

Microsoft Service Template Explorer Beta Program by Kristian Nese

PowerShell

Microsoft Virtualisierungs Podcast Folge 25: PowerShell 3.0 by Carsten Rachfahl

Create PowerShell Scripts with a Single Command by Jeffery Hicks

Friday Fun – Get Happy Holiday by Jeffery Hicks

Microsoft Office 365

Video Interview with Martina Grom, MVP for Office 365, at the Microsoft Community Open Days 2012 (in German) by Kerstin Rachfahl

Other MVPs I follow

James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management
Kristian Nese - MVP for System Center Cloud and Datacenter Management
Ravikanth Chaganti - MVP for PowerShell
Jan Egil Ring - MVP for PowerShell
Jeffery Hicks - MVP for PowerShell
Marcelo Vighi - MVP for Exchange
Lai Yoong Seng - MVP for Virtual Machine
Rob McShinsky - MVP for Virtual Machine
Hans Vredevoort - MVP for Virtual Machine
Leandro Carvalho - MVP for Virtual Machine
Didier van Hoye - MVP for Virtual Machine
Aidan Finn - MVP for Virtual Machine
Carsten Rachfahl - MVP for Virtual Machine
Thomas Maurer - MVP for Virtual Machine
Alessandro Cardoso - MVP for Virtual Machine
Ramazan Can - MVP for Cluster
Marcelo Sinic - MVP Windows Expert-IT Pro
Ulf B. Simon-Weidner - MVP for Windows Server - Directory Services
Meinolf Weber - MVP for Windows Server - Directory Services
Kerstin Rachfahl - MVP for Office 365

Not an MVP, but he should be one

Jeff Wouters - PowerShell


Announcing New Blog Series from Dell Cloud Services: How to Better Manage Your Cloud


The recent acquisition of Quest Software by Dell also brings the unique and powerful vOPS and vFoglight solutions from VKernel into Dell's end-to-end cloud solution for customers:

This acquisition more or less instantly positions Dell as a leader in the virtualization management space with strong entries in the Performance and Capacity Management segment (vKernel and vFoglight), the Applications Performance Management segment (Foglight), the Cloud Management segment (the existing DynamicOps/Dell VIS Creator relationship), Data Protection through the previously acquired AppAssure assets, and Security through the Quest One and previously acquired SonicWall assets.  – Bernd Harzog, Virtualization Practice  

This new joint blog series is a collaboration between Mattias Sundling, Dell vKernel Evangelist (@msundling) and Stephen Spector, Dell Cloud Evangelist (@SpectorAtDell). The new blogs will provide customers with deep insight into the variety of ways that virtualization configuration and management impacts public and private VMware cloud solutions. The #DellSolves blog platform is the home of this new blog collaboration and we encourage readers to follow that blog for this series and other great service solution information from Dell.

Planned topics for the blog series:

  • Five Capacity Management Challenges for Private Clouds
  • Guessing is Not a Strategy: The Science of Capacity Management
  • Planning for Performance and Capacity in Virtual Environments
  • The Top 20 VMware Performance Metrics You Should Care About
  • Ins and Outs of IOPS
  • Memory Management Metrics for VMware Environments

Of course, we are always interested in topics submitted from our readers so please respond to any of our blogs, contact us via twitter, or email Mattias or Stephen with your requests.


Dell PowerEdge R720 VMmark result tops Cisco, HP and IBM!


This post was written by Waseem Raja, an engineer on the Dell Solutions Performance Analysis team.

Dell beats the competition once again when it comes to virtualization performance!

The Dell PowerEdge R720 servers achieved a score of 11.39 @ 10 tiles in our latest VMmark 2.1 benchmark result. This result tops similar Intel Xeon E5-2600-based results from Cisco, HP, and IBM, regardless of form factor.

Here’s how the Dell PowerEdge R720 result compares to published results on Cisco UCS, HP ProLiant and IBM servers:

This is the first Dell result to use Violin flash memory arrays as the shared storage for the virtual machines. With these arrays Dell customers can achieve high IOPS with sustained low latencies. 

Each PowerEdge R720 was configured with two Intel Xeon E5-2690 processors and 256 GB of memory. The load-driving clients were also Dell PowerEdge servers and we used Dell PowerConnect switches for the networking. This result showcases how leveraging various Dell components along with optimized storage can provide breakthrough solutions for customers.

VMmark 2.1 is an industry-standard benchmark used to measure the performance and scalability of a virtualized environment. It combines application-level workloads with platform-level workloads (vMotion, Storage vMotion, and Deploy operations) for a realistic measure of virtualization performance.

VMware® VMmark® is a product of VMware, Inc. Comparison above based on the Dell PowerEdge R720 result of 11.39 @ 10 tiles submitted to VMware as of 12/11/2012, compared to the best published 32-core VMmark 2.1.1 results from Cisco, Hewlett-Packard, and IBM as of 12/10/2012.

Dell PowerEdge 12G Blades provide big industry-standard HPC benchmarks in a dense form factor


This post was written by Burton Finley, an engineer on the Dell Solutions Performance Analysis team.

Do you require the highest performing blade solution for HPC workloads?  The Dell PowerEdge M420 offers exceptional density while delivering up to 76% more performance per U compared to traditional half-height blade solutions.  The M420 offers up to 32 servers with 64 sockets in just 10U of rack space, or as many as 2,048 cores in a single 42U rack!
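The rack-level numbers above work out with simple arithmetic. A quick sketch (note that 8 cores per socket is my own assumption of a top-bin Xeon E5-2400 part, not a figure stated in this post):

```python
# Density math behind the M420 claim: 32 quarter-height blades per
# 10U M1000e chassis, two sockets each; 8 cores per socket is an
# assumed (maximum) Xeon E5-2400 configuration.
BLADES_PER_CHASSIS = 32
SOCKETS_PER_BLADE = 2
CORES_PER_SOCKET = 8      # assumption: top-bin E5-2400 part
CHASSIS_HEIGHT_U = 10
RACK_HEIGHT_U = 42

cores_per_chassis = BLADES_PER_CHASSIS * SOCKETS_PER_BLADE * CORES_PER_SOCKET
chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U  # 4 chassis fit in 42U
cores_per_rack = cores_per_chassis * chassis_per_rack

print(cores_per_chassis)  # 512
print(cores_per_rack)     # 2048
```

Four chassis in 40U of a 42U rack yields the 2,048-core figure quoted above.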

As seen in the chart below, the Dell PowerEdge M420 offers a 17% performance increase over the next fastest performing cluster utilizing 512 compute cores.  All of this is done in about one quarter of the rack space.

The Dell PowerEdge M620 boasts the highest SPEC MPI 2007 Large and Medium results for 256 compute cores submitted to SPEC.org.  The M620 is more than 30% faster than the next fastest 256-compute-core competitor submission on SPEC.org for Large workloads.  The Large disclosure can be found here and the Medium disclosure here.

SGI generically claims “World Record Cluster Performance” on its SGI ICE X web page, yet lacks the SPEC MPI 2007 results to back it up.

SPEC MPI 2007 stresses the performance of compute-intensive applications using the Message-Passing Interface (MPI).  The result is a benchmark that highlights the performance of the following cluster attributes:

  • CPU architecture
  • CPU quantity
  • MPI library
  • Communication interconnect
  • Memory architecture
  • Compilers
  • Shared file system

For additional performance analysis of the M420 and M620 blades, see Performance Analysis of HPC Applications Across Different Server Architectures.

SPEC and the benchmark name SPEC MPI are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of December 10, 2012.

BIOS Performance and Power Guidelines for Dell PowerEdge 12th Generation Servers


Dell PowerEdge customers wanting detailed BIOS tuning information for performance or power efficiency, look no further!  The BIOS Performance and Power Guidelines for Dell PowerEdge 12th Generation Servers whitepaper has just been released, and it contains a wealth of BIOS option information to help users optimize their servers for their specific needs.  The performance and power impacts of various options are discussed and charted in great detail, offering an invaluable guide for optimization.  This whitepaper can be used to tune all Dell 12th Generation servers across all three processor architecture offerings (Xeon E5-2400, E5-2600, and E5-4600 families).

 

The three primary BIOS Setup areas covered in detail are:

  • BIOS Memory Settings Section
  • BIOS Processor Settings Section
  • BIOS System Profile Section

Among the many options discussed are the impacts of Intel Hyper-Threading Technology (also known as Logical Processors).  The example below shows the performance impact of disabling Hyper-Threading on a variety of SPEC CPU2006 integer workloads.  Additional comparisons can be found in the whitepaper.

The impacts of Intel Turbo Boost Technology on performance and power are also charted.  Since this option can have a major impact on both performance and power efficiency, a number of workloads were used to characterize Turbo Boost behavior.  Below is a sample chart from the whitepaper, which uses one of many SPEC CPU2006 workloads to characterize Turbo Boost.

In addition, the System Profiles are compared for performance and power across multiple workloads, idle power is examined across all System Profiles, and the impacts of C-States and C1E are detailed.  Custom profiles and all of the exposed sub-options are analyzed in detail.  The chart below displays the results of a SPECpower_ssj2008 comparison between the four System Profiles as well as one Custom Profile.
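On 12th Generation servers, the System Profiles discussed above can also be inspected and switched remotely through the iDRAC's racadm interface, without entering BIOS Setup. The commands below are a sketch only; the exact attribute and group names can vary by BIOS release, so verify them on your own system before relying on them:

```shell
# Sketch only: attribute/group names may vary by BIOS release --
# confirm with "racadm get BIOS" on your own server first.

# Show the current System Profile settings group
racadm get BIOS.SysProfileSettings

# Select the performance-optimized System Profile
racadm set BIOS.SysProfileSettings.SysProfile PerfOptimized

# Queue a BIOS configuration job so the change applies on next reboot
racadm jobqueue create BIOS.Setup.1-1
```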

For much more detailed information, please consult the whitepaper.

 

Happy Tuning!

Design Methodology for Exchange Server 2010 on Dell Active System 800v


Designing and implementing Exchange Server 2010 solutions on a virtualized infrastructure requires a different approach than the traditional way of designing Exchange Server 2010 solutions on physical infrastructure. Engineers at Dell Global Solutions Engineering have proposed a novel design framework and implementation methodology for the recently launched Dell Active System 800v, a converged infrastructure solution that combines servers, storage, networking, and infrastructure management into an integrated and optimized system that provides virtualized resource repositories.

The design framework proposes a stepwise approach to designing solutions for Exchange Server 2010 deployments on Active System 800v. The goal of the framework is to design Exchange Server 2010 solutions, for various numbers of mailboxes and profiles, that are independent of the underlying platform. The framework concentrates mainly on identifying and configuring resources, including virtual processors, memory, storage, and resource configurations, along with rules for VMware Distributed Resource Scheduler (DRS), for the various server roles that are part of a typical Exchange Server deployment. Because Active System 800v consists of a completely virtualized computing platform based on VMware vSphere 5.1, virtualization best practices from VMware and application best practices from Microsoft are at the foundation of the framework. The outcome of using the design framework is a set of virtual resource configurations and DRS rules necessary to meet the application best practices and desired performance.

The implementation methodology mainly describes an instantiation of the concepts proposed by the design framework for an example Exchange solution: 5,000 mailboxes with a mailbox size of 1 GB, 150 messages per day per mailbox, and a two-copy DAG. The methodology concentrates on applying the design concepts while sizing and configuring storage, CPU, memory, and networks, along with DRS rules for the virtual machines in the Exchange solution. For example, as an outcome of applying the design concepts, a logical layout of a Database Availability Group (DAG) that can tolerate a storage array or DAG node failure is shown below, along with the allocation of Exchange application networks. Figure 1 depicts one possible distribution of database copies that can function in the event of a storage array failure or the failure of any one DAG node. Figure 2 depicts the allocation of Exchange public and private networks for the derived number of virtual machines.

Figure 1. DAG Layout

Figure 2. Exchange Networks

To ensure the correctness of the implementation, performance tests with tools like Microsoft Exchange Jetstress 2010 and Microsoft Load Generator were carried out, along with simulation of various failure scenarios. For more details on the design framework, the implementation methodology, and proof points for the sample implementation of Exchange Server 2010 on Active System 800v, please refer to the design and implementation guide: Microsoft® Exchange Server 2010 Implementation on Dell™ Active System 800v.
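To give a feel for the kind of sizing arithmetic the methodology formalizes, here is a back-of-the-envelope capacity sketch for the example deployment above. The 20% overhead factor is my own illustrative assumption (covering items such as dumpster, indexes, and database white space), not a figure taken from the Dell guide:

```python
# Rough capacity sketch for the example deployment:
# 5,000 mailboxes, 1 GB quota, two-copy DAG. The overhead factor is
# an illustrative assumption, NOT a number from the Dell guide.
MAILBOXES = 5000
QUOTA_GB = 1
DAG_COPIES = 2
OVERHEAD = 1.2  # assumed allowance for dumpster, indexes, white space

raw_db_gb = MAILBOXES * QUOTA_GB               # 5,000 GB of mailbox data
total_gb = raw_db_gb * DAG_COPIES * OVERHEAD   # capacity across both copies

print(f"{total_gb:,.0f} GB across both DAG copies")  # 12,000 GB
```

The real sizing in the guide also accounts for IOPS per mailbox, log capacity, and restore LUNs; this sketch only illustrates the raw database capacity step.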

Dell's Active System 800v is a member of Dell's Active Infrastructure family and includes the Dell PowerEdge™ M1000e blade chassis with the Dell PowerEdge™ M I/O Aggregator, Dell PowerEdge™ M620 blades, Dell EqualLogic™ storage, Dell Force10™ network switches, and VMware vSphere 5.1. Active System 800v is designed to provide a highly reliable and scalable platform for virtualization while simplifying infrastructure management. The incorporation of data center standards like DCB with converged networking enables Active System 800v to provide throughput benefits for enterprise workloads.
