
Active Directory Configuration Setup on Dell PowerEdge 12th Generation Servers Using Lifecycle Controller


Managing usernames and passwords for your data center is a big job, and finding a way to simplify user account and privilege management is not just desirable, it’s a necessity. While the iDRAC GUI that comes with Dell PowerEdge 12th generation servers enables Active Directory (AD) setup—even remote setup—account management remains largely a manual and tedious task. What’s needed is a solution that makes AD setup and management programmatic, automatic, and remotely accessible.

Dell Lifecycle Controller’s remote API provides just these capabilities, requiring only that you write and run your WS-MAN scripts. The API allows you to set up your iDRACs and configure users and their associated privileges. Users can then use their AD credentials to authenticate to all iDRACs (the iDRAC GUI, SSH, and telnet consoles) as well as to run RACADM and WS-MAN commands from the CLI.

When introducing a new Dell PowerEdge 12th generation server to your environment, you need only run the WS-MAN script to automatically set up AD on that new server.
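
As a rough illustration of what such a script can look like (the detailed instructions referenced below include the full procedure and sample WINRM commands), the sketch below uses the PowerShell 3.0 CIM cmdlets to read the iDRAC's Active Directory settings over WS-MAN. The IP address and credentials are placeholders, and the DCIM_iDRACCardEnumeration class and its property names are assumptions from the Dell iDRAC card profile to verify against the profile documentation for your firmware.

    # Rough sketch: read the iDRAC Active Directory settings over WS-MAN with
    # the PowerShell 3.0 CIM cmdlets. IP address and credentials are
    # placeholders; the class name and properties (GroupID, AttributeName,
    # CurrentValue) are assumed from the Dell iDRAC card profile.
    $idrac = "192.168.0.120"
    $cred  = Get-Credential                      # an iDRAC user, e.g. root

    $opt = New-CimSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck `
               -UseSsl -Encoding Utf8
    $ses = New-CimSession -ComputerName $idrac -Credential $cred `
               -Authentication Basic -Port 443 -SessionOption $opt

    Get-CimInstance -CimSession $ses -Namespace root/dcim `
            -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_iDRACCardEnumeration" |
        Where-Object { $_.GroupID -like "ActiveDirectory.*" } |
        Select-Object GroupID, AttributeName, CurrentValue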

See Active Directory Configuration Setup on 12G Servers Using Lifecycle Controller for detailed instructions on how to set up AD for Dell PowerEdge 12th generation servers, including

  • The Structure of the Active Directory Environment
  • Standard Schema or Extended Schema
  • Set up Active Directory Service
  • Set up the AD Attributes
  • Check the Setting
  • Test the Setting
  • Build Active Directory Server
  • Configure iDRAC for use with Active Directory Standard
  • Test your Standard Schema configuration
  • Sample WINRM Commands and  Mapping to iDRAC GUI Display Names

With Dell PowerEdge 12th generation servers and Dell Lifecycle Controller, your users need to manage only one username/password for all your iDRACs, and you, the IT administrator, need to manage only one username/password per user instead of hundreds.


  iDRAC with Lifecycle Controller Technical Learning Series, Lifecycle Controller Home : iDRAC7 Home


What are the partitionable NIC cards for Lifecycle Controller 2 (LC2)?


The NIC Partitioning feature of Converged Network Adapters (CNAs) allows administrators to organize multiple tasks on each port and intelligently distribute each port’s total bandwidth across the various partitions.

To read more about this feature, see this blog.

To find out which cards support this feature, see the table below.

The following cards support partitioning with Lifecycle Controller 2:

QLogic
  • QMD8262 bNDC – D90TX
  • QLE8262 NIC – JHD51
  • QME8262 Mezz – 9Y65N

Broadcom
  • 57810 bNDC – JVFVR
  • 57810 SFP+ NIC – N20KJ
  • 57810 Base-T NIC – W1GCR
  • 57810 Mezz – 55GHP
  • 57800 rNDC – MT09V

 

How do I tell if my server has one of these cards using Lifecycle Controller? See this blog.

Now that I know I have one of these cards, how do I set it up for partitioning? See this blog.
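
If you would rather script the check than click through the GUI, the card inventory the Lifecycle Controller collects is also exposed over WS-MAN. The sketch below is only a rough illustration using the PowerShell 3.0 CIM cmdlets; the address and credentials are placeholders, and the DCIM_NICView class and its properties are assumptions from the Dell NIC profile to verify against your environment.

    # Rough sketch: list the NIC models reported by the Lifecycle Controller
    # over WS-MAN so you can spot the partitionable cards from the table above.
    # Address/credentials are placeholders; DCIM_NICView and its properties
    # are assumed from the Dell NIC profile.
    $idrac = "192.168.0.120"
    $cred  = Get-Credential

    $opt = New-CimSessionOption -SkipCACheck -SkipCNCheck -UseSsl -Encoding Utf8
    $ses = New-CimSession -ComputerName $idrac -Credential $cred `
               -Authentication Basic -Port 443 -SessionOption $opt

    Get-CimInstance -CimSession $ses -Namespace root/dcim `
            -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_NICView" |
        Select-Object FQDD, ProductName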

Related links:

 

Keeping Dell Quickstart Data Warehouse Appliance Healthy with Microsoft System Center Monitoring Tools


You have a Dell Quickstart Data Warehouse Appliance installed in your data center, you are doing analytics on your data with a business intelligence tool such as Toad BI, and things are looking good, right?

More than likely, yes. However, to make the most of your data warehousing and business intelligence experience, you want to be sure everything stays healthy. Enter the Microsoft System Center Operations Manager monitoring pack for the Dell Quickstart Data Warehouse Appliance.

Why use System Center to monitor my appliance?

  • Enables a drill-down view of components specific to the appliance

What is required?

  • Quickstart Data Warehouse Appliance
  • SQL Administrator account
  • System Center Operations Manager 2007 R2 or higher
  • Domain Administrator privileges

For users who are familiar with Microsoft System Center Operations Manager, adding a management pack is a straightforward process. As a complete novice to System Center, I relied heavily on the detailed instructions that accompany the management pack, which can be found on the System Center Monitoring Pack for Microsoft Data Warehouse Appliance V2 page. The guide covers all the steps you need to perform, on both the System Center side and the appliance, in order to begin monitoring. The information is somewhat spread out in the guide, so it is best to do a read-through before starting.
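
If you prefer scripting the import over the console wizard, the Operations Manager 2012 shell can load the pack as well. This is only a minimal sketch under the assumption that the pack file has already been downloaded; the file path and the display-name filter are placeholders, not the pack's real names.

    # Minimal sketch (System Center 2012 Operations Manager shell): import the
    # downloaded management pack and confirm it is loaded. The .mp file path
    # and the display-name filter are placeholders.
    Import-Module OperationsManager

    Import-SCOMManagementPack -Fullname "C:\MPs\DataWarehouseAppliance.mp"

    Get-SCOMManagementPack |
        Where-Object { $_.DisplayName -like "*Data Warehouse Appliance*" } |
        Select-Object Name, Version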

Once System Center was set up and monitoring my appliance, it was easy to see system health and how individual components were performing. Of course, the major part of the data warehouse appliance is SQL Server, and with the monitoring tool you can view health down to the table level, as illustrated below.

When I set up my environment, I used System Center Operations Manager 2012 to do my monitoring, since I was not familiar with any version of the software prior to this and wanted to use the latest version. The 2012 version has many new features and gives users options like Linux/UNIX monitoring capabilities, templates for all dashboards, and the concept of resource pools, which makes System Center highly available out of the box. For more information about new features of Microsoft System Center Operations Manager 2012, visit the following: What's New in System Center 2012 - Operations Manager

 

Supported Fibre Channel cards in the new Lifecycle Controller 1 (1.6.0 release)


On March 26, 2013, a new firmware version was released for Lifecycle Controller 1 (LC1). This release added support for several Fibre Channel (FC) cards.

Download the new image here.

To find out which cards support this feature, see the table below.

The following cards support Fibre Channel (FC):

QLogic
  • QLE2660 Single Port FC16 HBA
  • QLE2660 Single Port FC16 HBA (LP)
  • QLE2662 Dual Port FC16 HBA
  • QLE2662 Dual Port FC16 HBA (LP)
  • QME2662 Dual Port FC16 HBA Mezzanine

Emulex
  • LPe16000 Single Port FC16 HBA
  • LPe16000 Single Port FC16 HBA (LP)
  • LPe16002 Dual Port FC16 HBA
  • LPe16002 Dual Port FC16 HBA (LP)
  • LPm16002 Dual Port FC16 HBA Mezzanine

How do I tell if my server has one of these cards using Lifecycle Controller? See this blog.
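
If you want to script that check, the WS-MAN hardware inventory can also be filtered for the FC HBAs. The sketch below is a rough illustration with the PowerShell 3.0 CIM cmdlets; the DCIM_PCIDeviceView class, its properties, and the filter string are assumptions to verify for your iDRAC and Lifecycle Controller firmware level.

    # Rough sketch: look for Fibre Channel HBAs in the PCI device inventory
    # exposed over WS-MAN. Address/credentials are placeholders; the class,
    # properties, and filter string are assumptions to verify for your firmware.
    $idrac = "192.168.0.121"
    $cred  = Get-Credential

    $opt = New-CimSessionOption -SkipCACheck -SkipCNCheck -UseSsl -Encoding Utf8
    $ses = New-CimSession -ComputerName $idrac -Credential $cred `
               -Authentication Basic -Port 443 -SessionOption $opt

    Get-CimInstance -CimSession $ses -Namespace root/dcim `
            -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_PCIDeviceView" |
        Where-Object { $_.Description -match "Fibre Channel" } |
        Select-Object FQDD, Description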

Related links:

So Say SMEs in Virtualization and Cloud Season 2 Episode 8 Dell | VMware - ATX VMUG with a Sports Twist


So Say SMEs in Virtualization & Cloud

Kong Yang and Todd Muirhead give their impressions of the ATX VMUG and conclude with a sports discussion. The ATX VMUG presentation files are available at vmug.com. Please note that in order to download the files you will need to log in as a VMUG member. It's easy and free to become a VMUG member; just follow the directions. As always, we welcome your thoughts and feedback.

Please click below to view the video.

(Please visit the site to view this video)

Interview with Cloudscaling’s Eric Windisch on the Simplicity Scales engineering blog


Cloudscaling just recently launched an engineering blog named Simplicity Scales: “This blog is about Cloudscaling’s engineers, engineering culture, and perspective on building elastic infrastructure clouds that don’t suck.” (Randy Bias, Cloudscaling). In the interview below, Eric Windisch gives an overview of the first posts at the Simplicity Scales blog:

Rafael: Can you introduce yourself, Eric? Who are you, and what do you do?

Eric: I'm a Principal Engineer at Cloudscaling. I have been at Cloudscaling for two years now, and I am an active developer and participant in the OpenStack community. I believe I have been - at least for the last 12 months - the second highest contributor to the Oslo shared library project within OpenStack. Also, I've been contributing to the Simplicity Scales blog.

Rafael: Tell us about the Simplicity Scales blog.

Eric: The Simplicity Scales blog was set up to provide an outlet for Cloudscaling engineers to express their thoughts and views to the world and to allow them to show some of the cool things that they're working on. The name comes from the idea that simple, loosely coupled systems are more scalable and fault tolerant than complex, tightly coupled ones. Simplicity Scales reflects our philosophy of cloud building and product design.

Rafael: Can you give some examples?

Eric: Sure. Presently I put out a post about the ZeroMQ work that I've been doing. That includes mostly the things that I did in Grizzly; it wrapped up this latest development cycle so that there was sort of a change log for that work that people could go and reference. One of the topics we will talk about is the Python 3 compatibility theme. That's an effort that I'm personally working on. Also, we are going to talk about service inventory and health in OpenStack and how to have awareness of the hosts that are in there.

Rafael:  Do you plan to expand the author base to non-Cloudscaling engineers?

Eric: At this time we only have plans for Cloudscaling engineers.

Rafael: I noticed that you barely have any Cloudscaling branding at the Simplicity Scales blog. Why?

Eric: The idea is to present the engineers as themselves. These are Cloudscaling engineers, but these are not necessarily the thoughts and opinions of Cloudscaling. That’s one of the biggest differences between the Simplicity Scales blog and the Cloudscaling company blog. So, at Simplicity Scale we can allow ourselves some more free range.

Rafael: Can you give examples of topics where you deviate from Cloudscaling’s view?

Eric: Sure. I'm working on a post regarding running OpenStack Nova on a Raspberry Pi (laughs). This is something that Cloudscaling is not looking to actually do. It’s something I did for fun, and I just thought: “Hey, it would be kind of neat to let people know if you were to do this - for whatever reason - what it actually looks like.” And in no way this relates to what's being done at Cloudscaling.

Rafael: Thank you very much, Eric. I am looking forward to reading more at the Simplicity Scales blog.

Eric: You’re welcome, Rafael.

Simplicity Scales resources

http://engineering.cloudscaling.com/

@ewindisch

@simplicityscale (no typo here - Twitter allows only 15 character handles)

Dell Open Source Ecosystem Digest #14. Issue Highlight: "OpenStack Grizzly Released"


“OpenStack Grizzly, the seventh release of the open source software for building public, private, and hybrid clouds, has more than 230 new features to support production operations at scale and greater integration with enterprise technologies. The OpenStack community continues to attract the best developers and experts in their disciplines with more than 517 contributors making 7,620 updates in the Grizzly release.” OpenStack Foundation

(Please visit the site to view this video)

This week’s interview

Nathen Harvey on Opscode’s “Incident Command System Workshop”



What is ICS in a nutshell?

The Incident Command System (ICS) is a flexible, scalable system that provides a common framework for emergency response and for managing resources during incidents. Developed in the 1970s in response to massive wildfires in California, ICS is the standard, battle-tested system used throughout the U.S. fire service.

 

Whom are you targeting with your workshop?

Operations, developers, R&D, engineering and executives engaged in emergency response for service affecting incidents.

 

What requirements do you need to meet in order to attend?

No pre-requisites.

 

What will attendees learn during your workshop?

The workshop will provide an overview of ICS, a description of ICS positions/roles, and case studies.

 

Can you explain what role Chef plays in ICS?

Chef and ICS both provide standardized and flexible systems for elegantly enabling massively scalable operations.

 

How can people learn more about ICS best practices?

FEMA's website is a great place to start!

ICS Workshop blog post.
Register for the ICS Workshop here.

DevOps

OpenStack

OpenStack is an open source cloud operating system. You can find an overview here. You can find a vast collection of blogs written by OpenStack community members at Planet OpenStack. Below please find a compilation of content provided by Dell partners, friends and employees.

Hadoop

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. Learn more at the Apache Hadoop site. You will find content created by our partners who are part of our Dell Emerging Solutions Ecosystem.

Contributors

 

Please find detailed information on all contributors in our Wiki section.

 

Contact

 

If you have any feedback, suggestions, ideas, or if you’d like to contribute - I’ll be happy to hear back from you.

 

Twitter: @RafaelKnuth

Email: rafael_knuth@dellteam.com

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #23


Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well; I hope you enjoy their posts. To all MVPs: if you’d like me to add your blog posts to my weekly compilation, please send me an email (Florian_Klaffenbach@Dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

 


Featured Posts of the Week!

I Was Honored With The Dell Rockstar Recognition by Didier van Hoye

Remote Desktop to Your Hyper-V Server From Android Tablet by Lai Yoong Seng

PowerShell Script that Relaunches as Admin by Keith Hill

KB2796995 – Offloaded Data (ODX) Transfers Fail On A Windows 8 or Windows Server 2012 Computer by Aidan Finn


 

Azure

#WindowsAzure Virtual Network Services for Hybrid #Cloud with System Center 2012 SP1 #sysctr by James van den Berg

Events

I Was Honored With The Dell Rockstar Recognition by Didier van Hoye

Exchange 2013 CU1 has been release and now? by Johan Veldhuis

Hyper-V

Test Hyper-V VHD Folders with PowerShell by Jeffery Hicks

Dismount ISO or DVD From the VM By Using Powershell by Lai Yoong Seng

Remote Desktop to Your Hyper-V Server From Android Tablet by Lai Yoong Seng

Configure CPU Compatibility For Migration Using Powershell by Lai Yoong Seng

E-Book: Ein Hyper-V-Cluster für unter 2000$ in German by Nils Kaczenski

Windows Server 2012 Hyper-V Live Migration Does Not Support Passthrough Disks by Aidan Finn

Can I Replicate Virtual Machines From iSCSI, Fibre Channel, SAS, Internal to … ? by Aidan Finn

End of Support Dates For Free Hyper-V Server by Aidan Finn

Converting Hyper-V VHD-files to VHDX using Windows PowerShell by Jan Egil Ring

Office 365

Office 365 Upgrades Are Coming by Aidan Finn

Himmlische IT Podcast Folge 26: Office 365 MVP Summit in German by Kerstin Rachfahl

PowerShell

#PSTip View Policy Settings with RSOP WMI classes by 

#PSTip Detecting Wi-Fi adapters by 

PowerShell Script that Relaunches as Admin by Keith Hill

System Center Virtual Machine Manager

Application virtualization servers with the App-V Server from VMM 2012 in Portuguese by Marcelo Sinic

 SQL Server

#PSTip List all fixed disk drives visible to SQL Server instance by Ravikanth Chaganti

#PSTip Change SQL Server MaxServerMemory configuration using SMO by Ravikanth Chaganti

Windows Client

How to activate Windows 8 Enterprise by Thomas Maurer

100% Pure Speculation – Will Windows “Blue” (“8.1”) Upgrade Require A New License? by Aidan Finn

You Have 365 Days To Replace Windows XP by Aidan Finn

Windows Server Core

Get Local Admin Group Members in a New Old Way  by Jeffery Hicks

How Old is the Admin Password by Jeffery Hicks

KB2796995 – Offloaded Data (ODX) Transfers Fail On A Windows 8 or Windows Server 2012 Computer by Aidan Finn

Cloning Domain Controller by Lai Yoong Seng 


Other MVPs I follow

James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management
Kristian Nese - MVP for System Center Cloud and Datacenter Management
Ravikanth Chaganti - MVP for PowerShell
Jan Egil Ring - MVP for PowerShell
Jeffery Hicks - MVP for PowerShell
Keith Hill - MVP for PowerShell
David Moravec - MVP for PowerShell
Aleksandar Nikolic - MVP for PowerShell
 - MVP for PowerShell
Adam Driscoll - MVP for PowerShell
Jeff Wouters - MVP for PowerShell and the man who had his own Newsletter Category before ;)
Marcelo Vighi - MVP for Exchange
Johan Veldhuis - MVP for Exchange
Lai Yoong Seng - MVP for Virtual Machine
Rob McShinsky - MVP for Virtual Machine
Hans Vredevoort - MVP for Virtual Machine
Leandro Carvalho - MVP for Virtual Machine
Didier van Hoye - MVP for Virtual Machine
Romeo Mlinar - MVP for Virtual Machine
Aidan Finn - MVP for Virtual Machine
Carsten Rachfahl - MVP for Virtual Machine
Thomas Maurer - MVP for Virtual Machine
Alessandro Cardoso - MVP for Virtual Machine
Steve Jain - MVP for Virtual Machine
Robert Smit - MVP for Cluster
Marcelo Sinic - MVP Windows Expert-IT Pro
Michael Bender - MVP Windows Expert-IT Pro
Ulf B. Simon-Weidner - MVP for Windows Server – Directory Services
Meinolf Weber - MVP for Windows Server – Directory Services
Nils Kaczenski - MVP for Windows Server – Directory Services
Kerstin Rachfahl - MVP for Office 365
Matthias Wolf - MVP Group Policy
Robert Mühsig - MVP ASP.NET/IIS


Interview with Nathen Harvey on Opscode’s “Incident Command System Workshop”


 

What is ICS in a nutshell?

The Incident Command System (ICS) is a flexible, scalable system that provides a common framework for emergency response and for managing resources during incidents. Developed in the 1970s in response to massive wildfires in California, ICS is the standard, battle-tested system used throughout the U.S. fire service.

Whom are you targeting with your workshop?
Operations, developers, R&D, engineering and executives engaged in emergency response for service affecting incidents.

What requirements do you need to meet in order to attend?
No pre-requisites.

What will attendees learn during your workshop?
The workshop will provide an overview of ICS, a description of ICS positions/roles, and case studies.

Can you explain what role Chef plays in ICS?
Chef and ICS both provide standardized and flexible systems for elegantly enabling massively scalable operations.

How can people learn more about ICS best practices?
FEMA's website is a great place to start!

ICS Workshop blog post.
Register for the ICS Workshop here.

Top Ten Reasons to attend Dell Enterprise Forum 2013!


Hello world wide web!  I have been asked to write another of my top ten reasons to attend blogs.  This year, however, we are talking about some seriously cool things!  What started as the EqualLogic User Group and the Compellent C-Drive conferences jointly became the Dell Storage Forum last year.  Well, 2013 has some cool things in store for you!  This year not only are we having all of our awesome storage platforms in attendance, but we are changing the conference to the Dell Enterprise Forum!  Because honestly, storage is just one part of the overall end-to-end converged enterprise environment, and we want to showcase how amazing our Dell servers, networking, services and software are in addition to our amazing storage.

This is THE one-stop event for channel partners and customers to get some serious technical information, peer networking, best practices, and sneak peeks.  DEF happens June 4-6, 2013, and it is jam-packed!  This year DEF is going to be held in San Jose, California.  For more information, including how to register for the event, check out the website at:  www.dellenterpriseforum.com

So let's begin with our Top Ten Reasons to attend Dell Enterprise Forum 2013!

#10.  Breakout Sessions!  With 100 different sessions that range from deep dive technical to best practices for implementation, there is no shortage of information to get from DEF.  This is the most jam packed conference we have done!  You can download the Agendas on the website and a quick view will show you there is so much information!

#9.  Instructor-led training.  Are you a new customer or just want to get some more training while you are there?  Then sign up for one of our Instructor Led Training Courses.  These are available either before the show starts or afterwards and are a great way to get more training out of your stay.

#8.  Ask the Experts!  Throughout the show you will have access to highly technical subject matter experts (some of whom actually designed the stuff!) and can talk to them about various things.  Do you have a question about your environment?  Or how about planning for a future project and wanting to get nitty-gritty on the tech involved?  Go ahead and ask, because that's why we are there!  We let our engineers out of their cubicles so they can come talk to you about the stuff they are working on.

#7.  Networking with peers and colleagues.  This isn't just a conference where you come in, sit down, watch a slide presentation and go home.  Oh no!  Half of the conference is just being in an environment with peers in your industry and sharing tips, concepts, best practices, and ideas.  You may find out the person next to you at lunch just finished implementing the same project you are preparing and may have loads of ideas for you!  During breakout sessions there are always lively discussions around what customers are doing and how they did it.

#6.  Better Together!  This year marks a first for us as we bring not only Dell Storage under one roof, but everything else that makes a working environment.  Are you a server customer and just want to learn about storage?  Do you have storage but didn't know about our amazing networking?  Did you know we have a ton of software offerings to meet almost any challenge you have?  Now in one spot you can learn the end to end solution provided by Dell and see how all of these things truly are better together for your environment.

#5.  Channel Partners, Vendor Partners, and the Expo Hall.  We couldn't do this all on our own so not only will we be there but we are bringing some of our best partners and channel partners to the show.  Visit our partners on the show floor and even attend specific partner breakout sessions to learn how their products can enhance your data center.

#4.  Hands on Lab.  Over the years our hands on lab has become a huge part of Dell Storage forum and this tradition will continue for DEF 2013.  We are even staying open later and opening sooner so people don't have to miss any sessions.  With everything from an introduction to the equipment to advanced integration with operating systems, the hands on lab allows you to go in and practice some of the things you learned in the sessions or just get your hands on some environments that you can work on without fear of breaking them.  With over 30 hours of available content it's a huge benefit to attendees.

#3.  Fun!  Even though the show is crammed full of technical information and labs and training, we still have a good time.  With our customer appreciation event, reception dinner and even fun things around the area we are all there to get a lot of learning and have a bit of fun.  Besides, can anyone complain about being in San Jose in June?

#2.  What's New and Sneak Peeks!  Not only will you get information on the newest software and hardware that has been released, but this is the only place where you can go into some of our super secret NDA sessions which roadmap out the next year or two of plans.  We even do some live demos of stuff that usually works!  Sometimes it's pre-alpha, but when it works ... whoohoo!

And if all this hasn't convinced you yet, the number one reason to attend Dell Enterprise Forum 2013 is ...

#1.  Keynotes and the Direction of Dell!  Where else can you go and hear keynotes from some amazing top executives on not only the direction of Dell, but the direction of the industry and how we plan to be there with you every step of the way?  The keynotes are a fantastic discussion on where we are going and how we are going to get there and should not be missed.

So there you have it, my top ten reasons to attend DEF in San Jose.  Granted, I could have written another 20 things!  This is one of the best shows around, and this year with the addition of networking, software, services and servers, it will be one heck of a show.  Year after year we get amazing compliments on the technical depth of the show.  People come away knowing more about their environment than they ever did before, and we think that is the true mark of a great tech conference.

So what are you waiting for?  Submit those requests!  Talk to your sales rep for more information and we will see you in San Jose!

Keep it virtual :)

-Will

As always: I work for Dell as a Product Solutions Engineer in the Nashua Design Center working on VMware integration with the EqualLogic PS Series SAN products. The views and discussions on this blog are my own and do not necessarily represent my employer. The idea of this blog is to educate and entertain, and hopefully it will accomplish that.

You can follow me on twitter at @VirtWillU.

Also, check out Dell TechCenter for these videos and other content on EqualLogic

MMS 2013: Automating Platform Updates for Dell PowerEdge Failover Clusters


This blog post was originally written by Michael Schroeder. Send your suggestions or comments to WinServerBlogs@dell.com.

To read more technical articles about Windows Server 2012 and Dell, go to the Windows Server 2012 page on Dell TechCenter.

Introduction

This week at the 2013 Microsoft Management Summit in Las Vegas we’re demonstrating system management features that improve datacenter efficiencies and platform management with Windows Server® 2012.

Microsoft Windows Server® 2012 introduces Cluster-Aware Updating (CAU), a feature that streamlines the update process across your Windows Server 2012 clusters. Most are familiar with CAU in terms of patch management for applying Windows Updates, that is, general distribution releases for the operating system itself. CAU can also automate BIOS, firmware, and driver updates end-to-end for your Dell PowerEdge Failover Clusters. The Dell Update Packages (DUPs) and the Server Update Utility (SUU) are compatible with the native hotfix plug-in to support CAU deployment. This allows you to easily orchestrate the updates across your Dell PowerEdge Failover Clusters with minimal effort; a short example of kicking off such a run follows the list below. Here are a few more benefits of leveraging CAU:

  • No service downtime with Hyper-V or CA Clusters
  • Leverages PowerShell v3.0 and can be further enhanced
  • Improves the update reliability for the IT Administrator
  • Significant time savings over manual updates
  • Adds a consistent IT update process to your environment
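
As a rough illustration only (the whitepaper in the Key Resources section has the supported end-to-end procedure and the XML configuration files), a CAU run that applies updates staged in a hotfix root folder, such as Dell DUPs laid out per the whitepaper, might be kicked off like this; the cluster name and the share path are placeholders.

    # Minimal sketch: start a Cluster-Aware Updating run using the built-in
    # hotfix plug-in against a folder of staged updates (for example Dell DUPs
    # organized per the whitepaper). Cluster name and share are placeholders.
    Import-Module ClusterAwareUpdating

    Invoke-CauRun -ClusterName "PE-CLUSTER01" `
        -CauPluginName "Microsoft.HotfixPlugin" `
        -CauPluginArguments @{ HotfixRootFolderPath = "\\mgmt01\CAU\HotfixRoot" } `
        -MaxFailedNodes 1 -MaxRetriesPerNode 2 -Force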

CAU in Action

(Please visit the site to view this video)

 

Key Resources

* Whitepaper: Update Dell Servers with Microsoft Windows Server 2012 Cluster Aware Update by Integrating SUU/DUP (includes XML configuration files): http://en.community.dell.com/techcenter/extras/m/white_papers/20217029.aspx

* How CAU Plug-ins Work: http://technet.microsoft.com/en-us/library/jj134213

MMS 2013: Windows Server 2012® and Dell 12th Generation Servers - Network Port Management with CDN


This blog post was originally written by Thomas Cantwell and Michael Schroeder. Send your suggestions or comments to WinServerBlogs@dell.com.

To read more technical articles about Windows Server 2012 and Dell, go to the Windows Server 2012 page on Dell TechCenter.

Introduction

This week at the 2013 Microsoft Management Summit in Las Vegas we’re demonstrating system management features that improve platform management and datacenter efficiency. A few of these features relate to Windows Server® 2012.

CDN – Consistent Device Naming

Have you ever had to rename or renumber the network ports in the OS to match the physical ports numbered on the back of a server after OS installation?

Background

PnP enumeration is the standard for automatically identifying devices and ensuring the right drivers are installed for a device.  It has been around for decades.  But PnP enumeration is non-deterministic.  This means it will discover and load drivers in the order received, and there is no way to ensure one device will be discovered ahead of another device. This is why devices in the OS are not necessarily "in order" and can end up mismatched to the physical ports on the server. Renaming network ports in hosts with a large NIC count isn't fun. Now magnify that by a large number of servers.

Dell engineering considered this customer pain point and brought a solution forward to the PCI SIG (Special Interest Group) as a specification change to address it.  Dell also worked with Microsoft to incorporate this capability into Windows Server® 2012, where Consistent Device Naming (CDN) is now available to help address this issue when deploying Windows.

Below is our MMS deck covering CDN:

 

Summary:  CDN simplifies datacenter system and network port management – host and cluster deployments become simpler, and Microsoft System Center applications are beginning to leverage this capability.
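
As a quick way to see the result on a Windows Server 2012 host, the inbox NetAdapter cmdlets list the names the OS assigned; on CDN-capable Dell 12th generation hardware those names should line up with the port labels on the chassis rather than needing a manual renaming pass.

    # Quick check on a Windows Server 2012 host: list the adapter names the OS
    # assigned. With CDN-capable hardware and drivers, these names follow the
    # physical port labeling, so no post-install renaming should be required.
    Get-NetAdapter |
        Sort-Object Name |
        Select-Object Name, InterfaceDescription, MacAddress, Status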

MMS 2013: Leveraging PowerShell 3.0 CIM Cmdlets and the iDRAC/Lifecycle Controller


This blog post was originally written by Aditi Satam and Michael Schroeder. Send your suggestions or comments to WinServerBlogs@dell.com.

To read more technical articles about iDRAC/Lifecycle Controller and Dell, go to the iDRAC page on Dell TechCenter.

Introduction

This week at the 2013 Microsoft Management Summit in Las Vegas we’re demonstrating system management features that improve datacenter efficiencies and platform management with Windows Server® 2012.

PowerShell and Dell PowerEdge iDRAC System Inventory -

Have you ever wanted to quickly check the hardware inventory for a host without powering on the server?

Background

DMTF (Distributed Management Task Force – see them at the MMS DMTF booth #718) has developed several standards that Microsoft Windows Server® 2012 leverages:

1)    WS-MAN (Web Services Management)

2)    CIM (Common Information Model)

Microsoft Windows Server® 2012 introduces CIM Cmdlets that are available in PowerShell 3.0 to improve and simplify management of any server or device (such as the Dell™ PowerEdge™ iDRAC) that complies with the CIM and WS-MAN standards. From a Windows® host, CIM sessions can be established to remotely connect to the iDRAC interface on your Dell 11th and 12th Generation PowerEdge™ servers. Once connected, you have access to a large number of management tasks, such as changing settings in the BIOS, collecting hardware inventory, and many others. You only need power to the system and a network cable connected to the iDRAC. You do NOT need to power up the system – once the system has been plugged in, the iDRAC goes live and is able to capture system inventory for consumption, without bringing the system online.

You can create simple PowerShell/CIM scripts to help you retrieve the system details you care about. The following PS script demonstrates one way you might retrieve the hardware inventory for a target system. The UI is generated in PowerShell as well.
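
A minimal sketch of that pattern (just the inventory retrieval, without the UI) follows, assuming the PowerShell 3.0 CIM cmdlets and the DCIM_SystemView class from the Dell profiles; the IP address, credentials, and property names are placeholders to verify against the profile documentation for your iDRAC firmware.

    # Minimal sketch: open a CIM session to the iDRAC over WS-MAN and pull the
    # system inventory view -- no OS and no powered-on host required, only AC
    # power and an iDRAC network connection. Class/resource URI and property
    # names are assumed from the Dell DCIM profiles.
    $idrac = "192.168.0.120"
    $cred  = Get-Credential                      # iDRAC credentials

    $opt = New-CimSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck `
               -UseSsl -Encoding Utf8
    $ses = New-CimSession -ComputerName $idrac -Credential $cred `
               -Authentication Basic -Port 443 -SessionOption $opt

    Get-CimInstance -CimSession $ses -Namespace root/dcim `
            -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_SystemView" |
        Select-Object Model, ServiceTag, BIOSVersionString, SysMemTotalSize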

Dell DevOps Leaders Explain How the Movement is Critical to Massively Scalable DataCenters


I recently had the chance to sit down with two DevOps leaders at Dell to learn more about the practice and why it is critical to the success of future enterprise datacenter deployments.

Rob Hirschfeld – Distinguished Engineer, Dell (@zehicle) – Blog – Agile in the Clouds

Judd Milton – Sr. Systems Engineer, Dell (@newgoliath) – Blog – DevOps Theory and Practice

(Please visit the site to view this video)

TechChat: Dell Fluid File System and Dell integrated NAS


Please join us for a 1-hour TechChat on Fluid File System and Dell’s integrated NAS and SAN solutions. During this online event, technology specialists from the FluidFS Product Management and Technical Marketing teams will be available to answer any questions you may have about Dell scale-out NAS solutions, file system architectures, performance and data protection.

Dell™ Fluid File System is designed to go beyond the limitations of traditional file systems, with a flexible architecture that enables organizations to scale linearly — that is, to grow capacity without diminishing returns on performance. The FluidFS architecture is enterprise-class and standards-based, supports multiple protocols, including CIFS and NFS, and incorporates innovative features for high availability, performance, efficient data management and data protection.  Learn more …

Chat with you later today at 3 PM Central …


OpenStack Board of Directors Talks: Episode 8 with Lew Tucker, VP/CTO Cloud Computing at Cisco Systems


Learn firsthand about OpenStack, its challenges and opportunities, market adoption and Cisco’s engagement in the community. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. In order to make these interviews easier to read, I structured them into my (subjectively) most important takeaways. Enjoy reading!

#1 Takeaway: OpenStack might turn into a multi-billion dollar ecosystem

Rafael: OpenStack claims to be an alternative to Amazon’s public cloud and VMware’s private cloud offering. Both are multi-billion dollar businesses whereas the vast majority of OpenStack related projects are still in a proof of concept phase. How is OpenStack going to catch up?

Lew: I think that’s simply a matter of time and maturation. I am very bullish and believe we will start to see the emergence of a multi-billion dollar ecosystem of companies that are building, deploying, servicing, supporting and running services on top of OpenStack. The IT industry has recognized that we will advance cloud computing much further and much more rapidly by working together on an open source cloud platform such as OpenStack.  And it is clear this is sorely needed in the market.

#2 Takeaway: It’s all about simultaneously driving both innovation and improvements for  production deployments of OpenStack

Rafael: What are the challenges ahead of the OpenStack community?

Lew: The main challenges that I see are in keeping the technical excellence and innovation in the forefront while maturing as a production ready system. Commercial software companies do a very good job at productization, QA, and technical support so that their customers may bring their products into production. I think we are already seeing the OpenStack community take on some of these issues:  how do you easily install it, how do you easily operate and manage OpenStack as a running system?

#3 Takeaway: Walk your talk – Cisco is using OpenStack internally in various ways

Rafael: What is Cisco's experience with OpenStack and how are you guys utilising OpenStack?

Lew: OpenStack is being used internally in a number of different areas. At WebEx, we are running OpenStack in production at several of our data centers as the platform for rolling out  new applications and services.  Our OpenStack engineering team therefore works very closely with the WebEx operations team since I sincerely believe it’s almost impossible to develop good software without running it yourself in production.  There are additional efforts within our IT organisation to roll out an internal OpenStack cloud for use by our engineers for software development, testing, and simulation.

In terms of Cisco’s code contributions to OpenStack,  given our area of expertise in networking, we are particularly focused on enhancing OpenStack’s Quantum network service to meet a broader range of advanced network centric applications.

This effort has led us to work with several of our leading service provider customers on their OpenStack deployment. It’s been rewarding to see the great interest in OpenStack by many leading service providers as they start to plan out their next generation systems, recognizing that cloud platforms are going to be the way they want to roll out their new services.

I think there is now widespread agreement upon in the industry that cloud computing has clearly proven to be the fastest way to develop and deploy applications. Your basic IaaS is a very powerful concept – the application development team doesn’t have to worry about the underlying infrastructure when they’re building or deploying an application.  Some of the service providers we are working with are therefore deploying OpenStack across their data centers not to become public cloud providers but rather to deliver their core services to customers.

#4 Takeaway: OpenStack is moving towards enterprise ready cloud – with Firewalls as a Service, Load Balancing as a Service …

An interesting OpenStack use case is to allow IT to develop and deploy traditional network functions as services integrated into OpenStack.  Think of Firewall as a Service, Load Balancing as a Service, Monitoring as a Service, Security as a Service … and have the same kind of elasticity that you have in traditional web applications.  In the community we’re already making progress in this area.  This is further enabled in large part by another trend in networking: software-defined networking. SDN APIs, operating at the level just below OpenStack Quantum, provide greater control over provisioning, allocation of resources, and retrieval of system information so these systems can be dynamically and elastically scaled through software.  SDN and OpenStack, working at different layers but in concert with each other, provide many more capabilities than we actually envisioned earlier on in cloud computing.

#5 Takeaway: OpenStack is transitioning from hype to providing real value

Rafael: What is your view on OpenStack adoption?  Do you see any patterns in the way OpenStack is being adopted?

Lew: One of the things I learned at Sun Microsystems was that you need to be careful about the so-called hype cycle.  People initially thought of Java as, well, … the next cure for cancer (laughs).  When that didn’t happen, they said Java was dead. Although the hype cycle may take off very, very quickly, you need to let it settle down before people start to see the real value in a technology.  In the case of the Java platform, despite its enormous early overhyping, it nonetheless has become so widespread in the data center that today it’s considered legacy. With OpenStack we are already seeing the hype subsiding and a realistic view taking hold.  Many organizations are now asking the practical questions: “How does OpenStack apply to my situation? Should OpenStack be the foundational piece of my next generation data center strategy?”  People are running pilots and doing their own evaluations.

Related to this, Geoffrey Moore has been talking about an expected shift in IT from “systems of record” to “systems of engagement.” Systems of record are what we traditionally think of as being handled by IT:  ERP, financial, and HR transactional systems, e.g. traditional kinds of business systems. These are fairly static, since you don’t often need to double your employee base or change the fundamentals of accounting over night. Systems of engagement, however, are partner, customer, or supplier facing systems. You have many more customers, partners, or suppliers than you have employees. These systems need to be much more dynamic in how they respond to changing needs, they might be seasonal, and they need to be able to adapt much more quickly to changes in the marketplace.  Systems of engagement therefore are ideally suited for deployment on a cloud platform with its elasticity, rapid deployment, and self-service capabilities.

These characteristics line up very well with the kind of a cloud platform that OpenStack represents. As businesses start considering how they are going to engage more directly with their customers, start running big data and analytics to learn a lot more about their customers from their interactions with them,  a virtuous cycle is created between customer awareness, big data analytics and cloud computing.  OpenStack may therefore take on a central role in these future systems of engagement.

#6 Takeaway: Hardware manufacturers are responding to a need for programmatic control over infrastructure by opening up their interfaces

Rafael: How do you think OpenStack and cloud in general will influence hardware providers?

Lew: There is a need for more and better programmatic control over infrastructure. I think hardware manufacturers are responding by opening up interfaces. At Cisco we are making all of our interfaces open, so that these can be addressed by software.

I sometimes get asked: “Well then, isn’t cloud computing going to commoditize infrastructure?” I don’t think so. There’s always this synergistic relationship between hardware and software, and with a service like OpenStack Quantum, we now have a way for applications to make requests of the underlying infrastructure, which is then most easily accomplished when the infrastructure exposes software APIs. Applications and infrastructure working together deliver the best user experience.

#7 Takeaway: Application developers are welcome as participants – in order to better understand their needs and build those into OpenStack

Rafael: Are there any contributors that you are missing in the OpenStack community?

Lew: I think what we're looking for particularly in this next year is to expand the OpenStack ecosystem to attract new software application and tools companies to build on top of the OpenStack platform.   While they may not be direct contributors to the platform itself, it is critically important that we can understand their requirements. So I think you’ll start to see concerted efforts being made to reach out to the application developer community to attract a growing set of apps and services running on top of OpenStack. In the end, it’s always the Apps that matter.

Rafael: Thank you very much, Lew.

Lew: You’re welcome.

Managing M1000e Chassis using CMC via serial connector


This blog post has been contributed by Carl Kagy, Phil Webster and Kalyani Khobragade from the CMC team.

The Dell Chassis Management Controller (CMC) allows you to utilize its serial connection to deploy, monitor, configure and update the chassis components. You can use it to perform extensive management actions with the Racadm CLI tool. 

This blog will show you how to connect to the CMC serial port. This connection can be used to drill through for access to the server COM ports within the chassis. 

 A typical scenario for use is shown below 

 

As shipped from the factory, the CMC serial port is enabled.  The CMC serial port default settings are 115200 baud, 8 data bits, no parity and one stop bit (115200,8,n,1). The workstation serial port settings must match the CMC settings. You must connect to the Active CMC (denoted by the blue LED) on a chassis that has two CMCs.

You will connect a null modem cable from the workstation serial port connector to the DB9 connector on the CMC as shown. The CMC requires hardware flow control signals, as shown below.  Use of flow control guarantees complete transfer of all characters.

 

Once the cable is connected and your terminal emulation application is running on the workstation, tap the return key once or twice.  You should see the CMC login prompt.  Log in to the CMC.  You can display the serial port settings with the "racadm getconfig" command as shown below.

 

The settings can be changed using the "racadm config" command; however, the workstation serial settings must be changed to match!  Alternatively, the settings can be changed through a network interface such as telnet or SSH, both of which offer the same CLI as the serial port.

To connect to a server, use the "racadm connect" command as shown in the screenshot above.  When finished with the session to the server, type the escape sequence (by default "<ctrl>-\") to return to the CMC command line. 

Some of you will want to connect with the ability to handle all binary data, for debug sessions.  Use the "-b" option of the "connect" command for this.  Note that you cannot use the escape sequence in a binary session.  You must reset the CMC to break the session.  On a chassis with two CMCs, resetting the CMC will cause the other CMC to become active (it will show the blue LED).  You will need to move the cable over to the other CMC.  To avoid this, you can temporarily remove the second CMC until you have completed debugging.

Now that you have access to the serial port, you can use server OS facilities for debug and configuration.

 

OpenStack Board of Directors Talks: Episode 9 with Alan Clark, Chairman of the Board


Learn firsthand about OpenStack, its challenges and opportunities, market adoption and SUSE’s engagement in the community - from Alan Clark, Chairman of the OpenStack Board and Director of Industry Initiatives, Emerging Standards and Open Source at SUSE. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. In order to make these interviews easier to read, I structured them into my (subjectively) most important takeaways. Enjoy reading!

#1 Takeaway: OpenStack aims to deliver the industry’s best open source cloud software

Rafael: Can you give a brief overview over where OpenStack is now?

Alan: The foundation exists to create a ubiquitous cloud platform. We want to deliver the best open source cloud software out there. I think we are doing a great job with that. I was looking at the latest data, and there have been over 3,000 patches submitted to OpenStack in the last quarter. That's a lot of work that the community is putting in. We’ve got around 800 contributors, we're averaging about 200 contributors a month participating in the project, and we have well over 7,000 members in the community.

A priority that we've been working on is accelerating the adoption of OpenStack and getting recognition for those that have already implemented OpenStack after trying it in their environments. One of the first things we did last fall was to create the user group. It’s not just another user group; we're actually getting its members involved to provide feedback into the development of the software. Lastly, we are making efforts to educate our ecosystem, so people know what the different components of the software do and how they can install, administer, and update them, and so forth.

#2 Takeaway: It’s tough to compete with a consortium of 150 enterprises backing OpenStack

Rafael: OpenStack aims to be an alternative to Amazon and to VMware. Both are fully operational businesses at a billion-dollar scale, whereas OpenStack is in a proof-of-concept phase. How does the OpenStack community intend to close that gap?

Alan: First, there are a lot of companies that have already deployed OpenStack or are using it in their businesses day-to-day, and they are not small businesses. If you look at the case studies that have been put out - AT&T, Rackspace, eBay and others - those are big companies and they are using OpenStack successfully. We need to educate on that, and I think we're pretty weak in the area of getting that news out.

Thinking about the number of companies and number of people that are involved, we get a community of over 7,000 members. We've got over 800 developers working on this cloud and over 150 companies that are involved.  VMware and Amazon are essentially competing with a consortium of 150 companies. That's tough, that's really tough. Although I think it’s not entirely fair to characterize VMware as competing because they're actually a member of the OpenStack community.

#3 Takeaway: Billions of smart sensors everywhere – in the future, cloud computing might propel an ecosystem of new services

Rafael: How do you envision the cloud computing industry in five years? What will be different compared to now?

Alan: It's very young technology, and I believe that cloud computing is going to look very different in the next 5 to 8 years. We're going to have billions of smart sensors everywhere all over the world, for example at your home, where they are going to keep track of the electricity you're using, whether your security systems are on or off, etc. All that data is going to turn into services. Instead of reactive services they're going to become proactive; they will require a lot of active computing, database processing and so forth. Those are going to create new, different and very exciting services over the next four or five years.

Ideally, much of that is going to be done in the cloud because it's going to have to be extremely elastic, very flexible - able to ebb and flow. Cloud computing is a very natural fit for those services. So I believe cloud is going to have to transform in order to fit those types of services.

#4 Takeaway: SUSE Cloud is a complete OpenStack implementation tied with SUSE Linux Enterprise

Rafael: How do you position SUSE within the OpenStack ecosystem?

Alan: SUSE focuses on the enterprise environment, and we’ve been shipping a product called SUSE Cloud.  It is a complete implementation of OpenStack tied with SUSE Linux Enterprise. The reasons why SUSE picked OpenStack are the robustness of the community and customer feedback. SUSE’s decision is based on customer demand, on the strength, vibrancy and innovative nature of the OpenStack project, and on the track record and ability of the community to deliver software.

Rafael: Thank you very much, Alan!

Dell Open Source Ecosystem Digest #15. Issue Highlights: Interviews with SUSE, Cisco and Cloudscaling


I posted a couple of interviews this week, and expect a lot more to come next week during the OpenStack Summit, which I am attending jointly with Stephen Spector, Cloud Evangelist at Dell.

This week’s interviews

Interview with Cloudscaling’s Eric Windisch on the Simplicity Scales engineering blog

Cloudscaling just recently launched an engineering blog named Simplicity Scales: “This blog is about Cloudscaling’s engineers, engineering culture, and perspective on building elastic infrastructure clouds that don’t suck.” (Robert Cathey, Cloudscaling). In the interview below, Eric Windisch gives an overview of the first posts at the Simplicity Scales blog:

Rafael: Can you introduce yourself, Eric? Who are you, and what do you do?

Eric: I'm a Principal Engineer at Cloudscaling. I have been at Cloudscaling for two years now, and I am an active developer and participant in the OpenStack community. I believe I have been - at least for the last 12 months - the second highest contributor to the Oslo shared library project within OpenStack. Also, I've been contributing to the Simplicity Scales blog.

Rafael: Tell us about the Simplicity Scales blog.

Eric: The Simplicity Scales blog was set up to provide an outlet for Cloudscaling engineers to express their thoughts and views to the world and to allow them to show some of the cool things that they're working on. The name comes from the idea that simple, loosely coupled systems are more scalable and fault tolerant than complex, tightly coupled ones. Simplicity Scales reflects our philosophy of cloud building and product design.

Rafael: Can you give some examples?

Eric: Sure. Presently I put out a post about the ZeroMQ work that I've been doing. That includes mostly the things that I did in Grizzly; it wrapped up this latest development cycle so that there was sort of a change log for that work that people could go and reference. One of the topics we will talk about is the Python 3 compatibility theme. That's an effort that I'm personally working on. Also, we are going to talk about service inventory and health in OpenStack and how to have awareness of the hosts that are in there.

Rafael: Do you plan to expand the author base to non-Cloudscaling engineers?

Eric: At this time we only have plans for Cloudscaling engineers.

Rafael: I noticed that you barely have any Cloudscaling branding at the Simplicity Scales blog. Why?

Eric: The idea is to present the engineers as themselves. These are Cloudscaling engineers, but these are not necessarily the thoughts and opinions of Cloudscaling. That’s one of the biggest differences between the Simplicity Scales blog and the Cloudscaling company blog. So, at Simplicity Scale we can allow ourselves some more free range.

Rafael: Can you give examples of topics where you deviate from Cloudscaling’s view?

Eric: Sure. I'm working on a post regarding running OpenStack Nova on a Raspberry Pi (laughs). This is something that Cloudscaling is not looking to actually do. It’s something I did for fun, and I just thought: “Hey, it would be kind of neat to let people know if you were to do this - for whatever reason - what it actually looks like.” And in no way this relates to what's being done at Cloudscaling.

Rafael: Thank you very much, Eric. I am looking forward to reading more at the Simplicity Scales blog.

Eric: You’re welcome, Rafael.

Simplicity Scales resources

http://engineering.cloudscaling.com/

@ewindisch

@simplicityscale (no typo here - Twitter allows only 15 character handles)

DevOps

OpenStack

OpenStack is an open source cloud operating system. You can find an overview here. You can find a vast collection of blogs written by OpenStack community members at Planet OpenStack. Below please find a compilation of content provided by Dell partners, friends and employees.

Hadoop

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. Learn more at the Apache Hadoop site. You will find content created by our partners who are part of our Dell Emerging Solutions Ecosystem.

Contributors

Please find detailed information on all contributors in our Wiki section.

Contact

If you have any feedback, suggestions, ideas, or if you’d like to contribute - I’ll be happy to hear back from you.

Twitter: @RafaelKnuth

Email: rafael_knuth@dellteam.com

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #24


Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well; I hope you enjoy their posts. To all MVPs: if you'd like me to add your blog posts to my weekly compilation, please send me an email (florian_klaffenbach@dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

 


Featured Posts of the Week!

Is Live Migration Supported Pass Through Disk in Windows Server 2012 Hyper-V? by Lai Yoong Seng

KB2821052 – "0x000000D1" Stop Error When Opening MPIO Snap-In WS2012 Computer by Aidan Finn

Prevent errors while using SPWebConfigModification by Thorsten Hans

Add drivers to Windows 8 ISO Image by Thomas Maurer


 

Azure

Windows Azure Recovery Services: Backup & Hyper-V Recovery Manager (Preview) by Thomas Maurer

First look at Microsoft #WindowsAzure Backup Services Preview by James van den Berg

Watch Now the Microsoft Management Summit #MMS2013 sessions on Channel 9 #sysctr #SCVMM #WindowsAzure by James van den Berg

#Microsoft System Center Management Pack for #WindowsAzure Fabric – Preview #sysctr by James van den Berg

Azure Access Control Service, Microsoft Account Login, JSON Web Token und ASP.NET – “ohne WIF-Magic” in German by Robert Mühsig

 

Exchange

Exchange Server 2013: Besser noch abwarten in German by Nils Kaczenski

The Exchange 2013 alphabet: Database Availability Group – part 1 by Johan Veldhuis

 

Events

MMS2013: SD-B303 How to Build Your Strategy For a Private Cloud by Didier van Hoye

Key Take Aways From MMS2013 by Didier van Hoye

Interview noch von der MVP-Summit 2013 mit Didier van Hoye MVP Virtual Machine by Kerstin Rachfahl

 

Hyper-V

Hyper-V Recovery Manager – Orchestration of Hyper-V Replica Failover by Aidan Finn

My Interview With SearchServerVirtualization – Getting Started With Hyper-V by Aidan Finn

Windows Server 2012 Hyper-V Installation And Configuration Guide Is Shipping In Europe by Aidan Finn

KB2819564–Corruption Occurs When Merging Data Deduplication-Enabled VHDs In WS2012 by Aidan Finn

KB2821052 – "0x000000D1" Stop Error When Opening MPIO Snap-In WS2012 Computer by Aidan Finn

KB2823643 – VMs Freeze At "Stopping" State After You Shutting Them Down On WS2012 Hyper-V Cluster by Aidan Finn

KB2823958 – Virtual Switch Extension Can’t Send Packets Separately Through Each NIC In WS2012 Hyper-V by Aidan Finn

vSphere Doesn’t Need Any Security or Bug Fixes by Aidan Finn (Aidan SMASH!)

Is Live Migration Supported Pass Through Disk in Windows Server 2012 Hyper-V? by Lai Yoong Seng

Creating Virtual Machine Using Powershell–Part 1 by Lai Yoong Seng

Iron Networks Announces Windows Server 2012 Hyper-V Network Virtualization Gateway Appliance by Thomas Maurer

System Center 2012 SP1 – Virtual Machine Manager support for VMware vSphere ESX Hosts by Thomas Maurer 

 

PowerShell

#PSTip Refreshing service objects by 

#PSTip Wait for a Service to reach a specified status by 

File Age Groupings with PowerShell by Jeffery Hicks

Get CIMInstance from PowerShell 2.0 by Jeffery Hicks

Friday Fun: Get-Anniversary by Jeffery Hicks

 

Sharepoint 

Prevent errors while using SPWebConfigModification by Thorsten Hans

 

System Center Core

Update Rollup 2 for System Center 2012 Service Pack 1 by Thomas Maurer

 

System Center Virtual Machine Manager

Hyper-V Converged Fabric with System Center 2012 SP1 – Virtual Machine Manager by Thomas Maurer

The Most Under-Appreciated & Under-Used Feature Of VMM: VM Templates by Aidan Finn

Interview with Allessandro Cardoso about his book: System Center Virtual Machine Manager 2012 Cookbook by Carsten Rachfahl

 

Windows Client

Add drivers to Windows 8 ISO Image by Thomas Maurer

 

Windows Server Core

The Death Of The Windows Service Pack … Or Is It? by Aidan Finn

Microsoft Virtualisierungs Podcast Folge 28: SMB 3.0 Teil 3 in German by Carsten Rachfahl

NIC Showing “Network Cable Unplugged” on Intel NIC by Lai Yoong Seng

#PSTip Discovering Active Directory FSMO Role Holders using PowerShell by 

 

Tools

Analyzing Performance with Server Performance Advisor 3.0 by Marcelo Sinic

PowerGUI Script Editor gets full support for PowerShell 3.0 by Ravikanth Chaganti

 


Other MVPs I follow

James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management
Kristian Nese - MVP for System Center Cloud and Datacenter Management
Ravikanth Chaganti - MVP for PowerShell
Jan Egil Ring - MVP for PowerShell
Jeffery Hicks - MVP for PowerShell
Keith Hill - MVP for PowerShell
David Moravec - MVP for PowerShell
Aleksandar Nikolic - MVP for PowerShell
 - MVP for PowerShell
Adam Driscoll - MVP for PowerShell
Jeff Wouters - MVP for PowerShell and the man who had his own Newsletter Category before ;)
Marcelo Vighi - MVP for Exchange
Johan Veldhuis - MVP for Exchange
Lai Yoong Seng - MVP for Virtual Machine
Rob McShinsky - MVP for Virtual Machine
Hans Vredevoort - MVP for Virtual Machine
Leandro Carvalho - MVP for Virtual Machine
Didier van Hoye - MVP for Virtual Machine
Romeo Mlinar - MVP for Virtual Machine
Aidan Finn - MVP for Virtual Machine
Carsten Rachfahl - MVP for Virtual Machine
Thomas Maurer - MVP for Virtual Machine
Alessandro Cardoso - MVP for Virtual Machine
Steve Jain - MVP for Virtual Machine
Robert Smit - MVP for Cluster
Marcelo Sinic - MVP Windows Expert-IT Pro
Michael Bender - MVP Windows Expert-IT Pro
Ulf B. Simon-Weidner - MVP for Windows Server - Directory Services
Meinolf Weber - MVP for Windows Server - Directory Services
Nils Kaczenski - MVP for Windows Server - Directory Services
Kerstin Rachfahl - MVP for Office 365
Matthias Wolf - MVP Group Policy
Robert Mühsig - MVP ASP.NET/IIS
Thorsten Hans - MVP Sharepoint

