
So Say SMEs in Virtualization and Cloud: Episode 35 VMworld 2012


In 'So Say SMEs in Virtualization & Cloud' Episode 35, Kong Yang and Todd Muirhead talk about the experience that is VMworld 2012. 

Please click below to view Episode 35. And don't forget to let us know what you think.

(Please visit the site to view this video)

Dell at VMworld 2012 Useful Links

  1. Dell at VMworld 2012 
  2. Dell at VMworld Virtual Pavilion
  3. DellTechCenter.com User Group at VMworld 2012: DTCUG-ip
  4. August 7th at 3-4PM CT - DellTechCenter.com Tech Tuesday Chat - Dell at VMworld 2012


Dell at VMworld 2012 TechTuesday Chat


Join DellTechCenter.com on August 7th at 3 PM Central Time for our Tech Tuesday Chat as we cover Dell at VMworld 2012.

Our featured guest is:

  • Jeff McNaught, Executive Director, Marketing and Chief Strategy Officer, Cloud Client Computing at Dell, @wysejeff.
    • Jeff is the Executive Director of Marketing and Chief Strategy Officer at Dell Wyse. Jeff is widely considered the most quoted spokesperson for thin and cloud client computing in the world. With the experience of hundreds of speaking engagements, articles and press interviews, he is considered an authority on the topics of cloud client computing, green IT, desktop virtualization and cloud computing. Jeff co-invented and spearheaded the development of the award-winning Wyse Winterm thin clients, the first enterprise thin clients to ship in volume, now with millions of units in use worldwide. Jeff joined the Wyse team in 1987 after holding positions at ITT, Netexpress, Tandy, and Cromemco. He earned his MBA from Pepperdine University.

(Please visit the site to view this video)

Kernel Debugging Over Network (KDNET) in Windows Server 2012 Articles


This blog was originally written by Abhijit Khande & Gobind Vijayakumar from DELL Windows Engineering Team.

Comments are welcome! To suggest a blog topic or make other comments, contact WinServerBlogs@dell.com.

 

With Microsoft® Windows Server® 2012, Microsoft has introduced a new debugging method known as KDNET, which performs kernel debugging over the network. In this blog we discuss KDNET and how it can be configured and used on Dell servers.

In previous versions of Microsoft® Windows operating systems, kernel debugging was performed over a serial cable, USB, or 1394 (FireWire). These methods present several challenges: serial connections are slow, USB debugging requires special hardware that may not be cost-effective, and 1394 ports are rarely available on servers.

Kernel debugging over the network overcomes these challenges.

Kernel debugging over the network has several advantages over the other methods:

  • Multiple times faster than Serial Cable
  • Less Hassle in Setup
  • More Flexible
Dell engineers have written two detailed how-to articles on implementing KDNET on Dell PowerEdge servers; see those in-depth articles for complete configuration guidance.
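To give a flavor of what the setup involves, here is a minimal sketch using the standard Microsoft tools. The host IP, port, and key below are placeholders, so treat this as an outline rather than the exact procedure from the how-to articles:

  :: On the target server (elevated command prompt): point kernel debugging
  :: at the debugging host and set the connection port and encryption key.
  bcdedit /dbgsettings net hostip:192.168.10.2 port:50000 key:1.2.3.4

  :: Enable debugging for the current boot entry, then reboot the target.
  bcdedit /debug on

  :: On the debugging host: attach WinDbg over the network using the same
  :: port and key.
  windbg -k net:port=50000,key=1.2.3.4

KDNET also requires a supported network adapter on the target, so check adapter support before relying on this setup.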

Dell vStart 50 management server configuration update


This post written by James H. Meeker, Global Product Marketing Manager, Dell Converged Solutions

The vStart team has announced an update to the vStart 50 management server, now qualified with the PowerEdge R420.

Why is this important? For most customer environments, flexible deployment of a vStart is a critical requirement. vStart allows for this flexibility without affecting the architecture or key certification elements in the solution stack. The R420 introduces advanced server capabilities at a competitive price to host the vStart 50 management stack.

For small- and medium-sized deployments, a decision needs to be made during the purchase process regarding the management server in the configuration. For example, should the customer already have a dedicated management server, they may wish to use the vStart solution as a virtualization farm for expansion. There are many opinions on this: experienced data center administrators will say that a dedicated management server is necessary when troubleshooting an error event, since it provides a known configuration to start from. Virtualization professionals may counter that a virtual management server, with a virtual copy, provides similar benefits without the costs associated with a dedicated management server.

Forcing a management server into every configuration does not work either. For example, when it comes time to expand a vStart by purchasing a second unit, the management server is not necessary at all; the design of vStart improves cost per VM as you scale out by not requiring this overhead for each rack. For additional information on the vStart family of solutions, visit dell.com/vstart.

Private to Hybrid Cloud with vCloud Connector – How To


VMware has released a series of new blog posts highlighting various procedures for manipulating virtual machines in a vCloud environment. These posts are useful for IT administrators looking to understand how to properly operate a vCloud solution. To support Dell customers, I am drawing on some of that content to look at how to move workloads from a private vCloud to a hybrid vCloud.

Step 1: Download and Install the vCloud Connector

The application is available as a download on the vCloud Connector page with installation instructions for server and node deployments.  The diagram below from the VMware Installation and Configuration Guide details the architecture of vCloud Connector.

  • The vCloud Connector UI appears in either the vSphere client or at vcloud.vmware.com, letting the administrator move workloads between the private and hybrid cloud.
  • The vCloud Connector Server is an appliance running in its own virtual machine, managing the entire process and UI for virtual machine movements.
  • The vCloud Connector Node(s) are appliances that transfer the virtual machines from one cloud to another.
  • Browser administration is also available at vcloud.vmware.com as well as through the vSphere client.

Step 2:  Add the Hybrid vCloud & Authenticate

Once you have installed the vCloud Connector appliances, you will need to configure your external hybrid cloud to allow workload transfer. The diagram below from the VMware blog shows the information necessary to authenticate to your hybrid cloud for workload transfer.

Step 3: Move Virtual Machine Workloads Between Clouds

Now that you have the hybrid cloud connected to your private cloud, you can simply select the workloads to move from or to the hybrid cloud. All virtual machines must be stopped when moving between clouds.

In the diagram below, I show the data transfer process as detailed in the VMware Installation and Configuration Guide.

  1. VM transfer request sent from UI to server appliance
  2. Server appliance sends information to node appliance
  3. Node appliance sends “export” request to vCenter server
  4. Content is copied from datastore to node appliance cache
  5. Content is transferred from node appliance to the other cloud's node appliance
  6. New cloud node appliance tells vCloud Director to import virtual machine(s)
  7. Content transferred from new cloud node appliance to vCloud Director storage
  8. vCloud Director notifies the vCenter server to import the virtual machine(s)
  9. Content transferred to datastore for cloud usage

 

PowerEdge R820 - An Efficient Consolidation Platform for SQL Server 2012 Database Solutions


This blog is posted on behalf of Vinod Kumar from Database Solutions Engineering team.

Deploying database workloads in a virtualized environment always seems to be a challenge because they are I/O intensive. It is certainly great to see concurrent database virtual machines scale well on an excellent server platform.

The latest Dell PowerEdge 12th generation servers provide the robustness and reliability for a highly efficient database consolidation platform. The PowerEdge R820, with support for the multi-core Intel® Xeon® processor E5-4600 product family, delivers outstanding server performance. With 2-socket and 4-socket configuration options, it can feature up to 32 cores of latest generation server-class processing power, and it supports up to 16 TB of internal storage and up to 1.5 TB of memory.

The database solutions engineering team at Dell carried out several tests to evaluate the database consolidation capabilities of PowerEdge R820, with Microsoft Hyper-V as the hypervisor layer. The studies were conducted using SQL Server 2012, the latest database release from Microsoft. The study was consolidated into the technical whitepaper named Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820.

This whitepaper discusses some of the possible virtualized reference configurations using PowerEdge R820, SQL Server 2012 and Hyper-V.

Given below is the proposed sample reference architecture for a highly available virtual machine configuration.
Tests were conducted on PowerEdge R820 to verify the scalable performance achieved using 2vCPU and 4vCPU database VM configurations. The virtual machines were concurrently stressed using standard OLTP (TPC-E) workloads.

We observed that the virtual machines scaled up without any major performance impact until overall server utilization reached 85% to 90%. Using 4 processor cores per socket (HT disabled), a single R820 was able to scale up to 4 large virtual machines (each with 4 vCPUs). At the same time, it could accommodate 7 medium virtual machines (each with 2 vCPUs) without any major performance impact. The result highlights are depicted in the figure below.


  

We did observe performance degradation of individual VMs once overall server utilization went beyond roughly 90%. This observation makes it clear that the overall utilization of a virtualized system must be constantly monitored and that the system must be sized optimally to meet the workload requirements.

The complete details of the configuration and the performance evaluations can be found in the whitepaper linked here: http://en.community.dell.com/techcenter/enterprise-solutions/m/sql_db_gallery/20154974/download.aspx

Updated Lync Server 2010 Reference Architectures and Sizing Data


This post is authored by Akshai Parthasarathy, Baaskar R, and Andrew Coleman from Global Solutions Engineering

In April of this year, the Global Solutions Engineering team delivered a Lync Server 2010 Virtualization Whitepaper and an accompanying Reference Architecture booklet. We also went into more detail through blogs, showing the virtual host configuration for Lync Server 2010 that we used, and presented how we scaled out and measured Lync performance when testing our architecture. Based on feedback, we have enhanced these deliverables with additional Back End SQL database results and more information about the Edge Servers.

The data generated for both of these deliverables was obtained by using the Lync Server 2010 Stress & Performance Tool and the sizing guidance provided in the Server Virtualization in Lync Server 2010 whitepaper.

Updated Whitepaper with SQL Database Results

The whitepaper that centered on the sizing of Lync Server is now updated with Back End SQL database sizing information. This paper now recommends where to place the SQL database and log files, and it also recommends LUN sizes, number of CPUs, memory size, and disk latency for the SQL components. We conducted tests by varying the number of user logins per minute from 100 to 200 to 400 users per minute. This represents the peak number of users that log in to Lync. For instance, our test methodology represents the real-world scenario when 100, 200, or 400 Lync users log in to the client between 8 and 9 o'clock in the morning. Our data indicates that with a single EqualLogic PS6100XV storage array (24 x 146 GB 15k disks), disk latencies are sufficiently low for minimal impact to Lync performance, even at the maximum user login rate of 400 users/minute. This login rate corresponds to conditions that may be experienced in a fairly large Lync deployment of about 25,000 users, assuming a login period of one hour (8 a.m. to 9 a.m.). Shown below is a figure from the whitepaper for these Back End SQL disks and their latencies.

As shown above, the disk performance on a single EqualLogic array with 15k spindles is adequate for this login rate. Based on this data, the whitepaper also recommends a sample reference configuration for 12,000 Lync users, which now includes a validated Back End component and a recommendation for Edge Servers in the perimeter network.

Updated Reference Architecture Booklet

We at the Global Solutions Engineering team have also updated the Lync Reference Architecture booklet to include the latest PowerEdge servers, EqualLogic storage, and PowerConnect and Force10 switches. This booklet provides a design overview of the server, storage, and networking required for deployments starting at 500 and scaling up to 25,000 users. Edge Servers are now included in these designs, and the back end is sized based on our findings in the lab. This updated RA booklet can be found here.

We hope you find these papers useful. We look forward to any comments on this blog post!

Dell SecureWorks Launches Advanced Threat Resource Center


Last week, the Dell SecureWorks team announced a new online resource focused on advanced cyber threats. The Advanced Threat Resource Center contains information from various security leaders in the industry as well as from the Counter Threat Unit research team at Dell SecureWorks, which is on the leading edge of discovering and solving the latest threats.

The interactive site contains webcasts, videos, and whitepapers, and even a survey tool to score your ability to respond to a major incident, along with other resources.

Dell is proud to be an industry leader in security and the new resource center is an excellent way for our customers to engage directly on this critical issue.


Dell’s New Ready-to-Use Reference Config for Exchange Server 2010 Deployments


This blog post is authored by Mahmoud Ahmadian, Megha Jayaraman and Rahul Srivastava from Global Solutions Engineering team.

Messaging requirements invariably differ from one organization to another, and given ever-growing platform capabilities, designing an aptly sized Exchange Server 2010 infrastructure is a momentous challenge. However, aided by Dell's Exchange Server 2010 Solution Manuals, which weed out the affiliated design complexities, Dell customers can directly map their organization's messaging requirements to the pertinent solutions documented there, accelerating time-to-value and bringing in tangible business value.

Dell's new Exchange Server 2010 Reference Architectures are purpose-built, pre-sized, and pre-validated messaging solutions that combine outstanding performance, service availability, and scalability with cost-effectiveness, simplified management, and a reduced footprint. Dell Reference Configurations are based on three architecture models, each specifically designed for the size/scale, capacity, and fault-resiliency requirements of Exchange Server environments, while keeping performance and cost in mind.

Agile Consolidated Model: This model focuses on a virtualization strategy for hosting the maximum possible number of mailboxes (above 1,000) on Dell servers and SANs. It is designed for customers primarily interested in enhancing efficiency and reducing costs through consolidation and standardization. By leveraging centralization and sometimes virtualization, this model is designed for robust, efficient, high-performing, and highly available and recoverable Exchange Server 2010 deployments. In particular, the agile consolidated model combines Dell PowerEdge 12th generation servers with virtualization, Dell EqualLogic™ arrays or the Dell Compellent Series, and Dell Force10 switches to enhance resource utilization, efficiency, and manageability. The table below shows the latest configurations added to this solution manual.

Server | Storage | Number of Mailboxes | Max Mailbox Size | Max User I/O Profile
PowerEdge R620 | EqualLogic PS4100E | 1,500 | 4 GB | 200 messages per day
PowerEdge R620 | EqualLogic PS4100X | 2,500 | 3 GB | 200 messages per day
PowerEdge R720 | EqualLogic PS6100E | 5,000 | 2 GB | 100 messages per day
PowerEdge R720 | Compellent SC40 | 5,000 (Virtualized) | 2 GB | 100 messages per day
PowerEdge M620 | EqualLogic PS6500X | 10,000 (Virtualized) | 1 GB | 150 messages per day
PowerEdge M620 | Compellent SC40 | 25,000 (Virtualized) | 2 GB | 100 messages per day

Simple Distributed Model: This model entails consolidated and cost-effective configurations with internal/external (DAS) storage and leverages Exchange Server 2010's continuous availability features. These configurations are designed for organizations planning to deploy 1,000+ mailboxes and for whom ease of management and simple deployment are prime concerns. Exchange Server 2010 native high availability features provide reliability to the environment. In this model, an Exchange Server 2010 multi-role (Mailbox, Hub, and CAS) server is combined with “storage bricks” to give an end-to-end solution. Customers can choose either internal server storage or external storage (DAS) for the multi-role mailbox server. The latest manual on the Simple Distributed Model showcases optimized Exchange Server reference architectures using Dell PowerEdge 12th generation servers and Dell PowerVault storage, as seen below.

Server | Storage | Storage Model | Number of Mailboxes | Max Mailbox Size | Max User I/O Profile
PowerEdge R720xd | Not Applicable | Internal Storage | 1,500 | 3 GB | 150 messages per day
PowerEdge R720xd | Not Applicable | Internal Storage | 4,000 | 1 GB | 150 messages per day
PowerEdge R720xd | Not Applicable | Internal Storage | 5,000 | 5 GB | 150 messages per day
PowerEdge R720xd | Not Applicable | Internal Storage | 7,500 | 1.5 GB | 150 messages per day
PowerEdge R720xd | MD1200 | External Storage | 2,500 | 3 GB | 100 messages per day
PowerEdge R720xd | MD1200 | External Storage | 5,000 (Multi-Site) | 3 GB | 150 messages per day
PowerEdge R620 | MD1200 | External Storage | 5,000 (Multi-Site) | 3 GB | 100 messages per day

Small Branch Office Model: This is an energy-efficient, dense, and affordable model that allows organizations to make the most of the internal storage of Dell PowerEdge 12th generation servers, hosting up to 1,000 mailboxes on a single server platform. This model has been designed to cater to the requirements of small organizations and branch/satellite offices, with considerations of reliable performance, consolidated form factor, and simplified management at a reduced cost. The table below shows a glimpse of the latest configurations added to this solution manual.

Server | Storage | Number of Mailboxes | Max Mailbox Size | Max User I/O Profile
PowerEdge T420 | Internal | 500 (Virtualized) | 2 GB | 150 messages per day
PowerEdge T420 | Internal | 500 | 4 GB | 150 messages per day
PowerEdge R520 | Internal | 500 | 4 GB | 150 messages per day
PowerEdge R420 | Internal | 250 | 4 GB | 150 messages per day

Sample configurations in all of Dell's architecture models enable customers to design and deploy cost-effective messaging solutions with high availability, ease of management, and the ability to scale as business requirements demand.

iDRAC with Lifecycle Controller: Scripting Remote Firmware Update


This post is co-written by Raja Tamilarasan and Chris Poblete

Dell’s 12th generation of PowerEdge servers come equipped with the second generation of embedded server management, iDRAC7 with Lifecycle Controller.

When updating the BIOS or other firmware in your server, you are probably used to downloading the update executable from support.dell.com and simply running the update on the host OS. It takes only a few steps, and moments later you reboot your system for the update to take effect. That sounds fine in an ideal situation.

What if you had to do this on hundreds of servers?
What if you had to do it on a server without an OS?
What if you want to run the update during a scheduled downtime?
What if you could improve your overall update experience?

iDRAC7 comes to the rescue. Lifecycle Controller in iDRAC7 offers you an update solution through its WSMAN interface, which provides standards-based data messaging over a secure SSL connection. This interface is an API, but it can easily be scripted using tools already available in the OS, such as winrm, or client libraries such as those from the open-source OpenWsman project.

The remote update environment looks like:


You put all the relevant update files on a network share (3) that supports a download service and run your update scripts on a client system (1). The following summarizes the update process, with a small winrm sketch after the list:

  1. Enumerate DCIM_SoftwareIdentity to select the component to be updated.
  2. Invoke the InstallFromURI method with the download information, including the appropriate update file.
  3. Schedule the update job with a reboot job to execute at a time of your choosing.
  4. At the scheduled time, the system reboots and the update is executed.
  5. Thereafter, you may check the update job for success or failure.
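As a concrete taste of step 1, the firmware inventory can be enumerated with the stock winrm client. This is a minimal sketch; the iDRAC IP address and credentials are placeholders, and the exact resource URIs and options for your environment are covered in the whitepaper below:

  :: Enumerate installed and available firmware versions (DCIM_SoftwareIdentity)
  :: from a remote iDRAC7 over WSMAN.
  winrm enumerate "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_SoftwareIdentity?__cimnamespace=root/dcim" -r:https://192.168.0.120/wsman -u:root -p:calvin -a:basic -encoding:utf-8 -SkipCNcheck -SkipCAcheck

Each returned instance identifies a component; that identity, plus the network-share location of the update file, is what the InstallFromURI call in step 2 consumes.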

See the full article in this whitepaper: 

   Scripting WS-MAN Firmware Updates



  iDRAC with Lifecycle Controller Technical Learning Series


PowerEdge M820 bests the ProLiant BL660c Gen8 in SPEC benchmark performance


HP recently touted that the ProLiant BL660c Gen8 blade had achieved four world records in the SPEC CPU2006 benchmark.  With this week's launch of the Dell PowerEdge M820, a 4 socket blade based on Intel's new Xeon E5-4600 series processors, I wanted to highlight the M820's advantage over the BL660c Gen8 in SPEC CPU2006 and in other important SPEC benchmarks.

First up, the SPEC CPU2006 speed single-threaded test, where the M820 scores 16% to 21% higher than HP's published scores for the BL660c Gen8.


It's not a surprise that HP didn't include their SPECspeed results in their performance brief on the BL660c Gen8.

In the SPECrate multi-threaded portion of SPEC CPU2006, the results are almost dead even, with the PowerEdge M820 meeting or exceeding the performance of the BL660c Gen8 on every score.

The M820's energy efficiency (4,853 SPECpower_ssj2008 overall ssj_ops/watt) and Java server performance (2,804,562 SPECjbb2005 bops*) are higher than those of any other four-socket blade on spec.org. Unfortunately, HP hasn't published results for the BL660c Gen8 on these benchmarks, so we can't know exactly how the two blades would stack up there. I'll revisit this blog entry in the future if and when HP decides to publish these benchmarks for the BL660c Gen8.

I'd say that as of now, first place in performance for four-socket blades goes to the PowerEdge M820. The M820 also has twice as many hard drives and 50% more DIMM slots than the BL660c Gen8, which is just icing on the performance cake.

 

* Dell PowerEdge M820 (4 chips, 32 cores, 64 threads) 2,804,562 SPECjbb2005 bops, 16 JVMs, 175,285 SPECjbb2005 bops/JVM.  SPEC® and the benchmark names SPECjbb®, SPECpower®, SPEC CPU2006®, SPECspeed®, and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation.  For latest SPEC benchmark results, visit www.spec.org.

VMworld 2012 - Get the Inside Scoop on Dell Converged Infrastructure


Greetings Dell TechCenter community members!

We know that many of you will attend VMworld 2012 in San Francisco, one of the biggest gatherings of IT professionals in the world.  While you are there, Dell is offering an opportunity for you to get the inside scoop about Dell's converged infrastructure plans.

Will you be at VMworld 2012 in San Francisco? Do any of the following statements sound like you?

  • Do you have hands-on or decision making responsibilities for your virtualization infrastructure?
  • Do you utilize advanced functionality within vCenter or vCloud Director? (e.g., live migration, high availability, DRS and/or storage DRS, etc.)
  • Has your production environment been virtualized for 3 or more years?
  • Are you considering a transition to a converged infrastructure, or already managing one?

If some or all of the above sounds like you, how would you like the opportunity to meet with us at VMworld to have a conversation about the future of virtual infrastructure? If this sounds like something you’d like to do, email michelle_peterson@dell.com.

Also, visit the Dell at VMworld 2012 landing page for more information on how to connect with us at VMworld 2012, San Francisco.

A Closer Look at the Dell Cloud Services for SAP Solutions


Dell is proud to offer existing and future SAP customers cloud solutions to enhance and modernize SAP access and management in private and public clouds. These Dell solutions provide SAP customers just-in-time resource allocation, expansion beyond physical resources, and incremental, on-demand capacity based on Dell Cloud Services.

Dell Cloud Starter Kit for SAP Solutions

Thinking about how you can enhance your existing SAP deployments in the cloud? Why not take advantage of the Dell Cloud Starter Kit for SAP services and see first-hand how SAP on the Dell cloud can transform your business?

The Dell Cloud Starter Kit for SAP Solutions provides customers with the latest innovation in IT infrastructure via an enterprise-proven, secure cloud from Dell. Moving your SAP workloads to a cloud environment from Dell brings a range of new benefits and opportunities.

Dell Cloud Development Kit for SAP Solutions

Customers looking to move their SAP solutions to the cloud can take immediate advantage of the Dell Cloud Development Kit for SAP Solutions to obtain a bundled set of products and services. Included in this kit are the following:

  • 1 year of Dell vCloud Datacenter Service with Dell SecureWorks and 24x7 support
  • 32,000 SAPS of Capacity
  • SAP Consulting Services from Dell
  • Dell Boomi Subscription

For more information on how your business can leverage Dell clouds for SAP, contact your Dell account representative or visit our website.

Offloaded Data Transfer (ODX) in Microsoft Windows Server 2012


This blog post was originally written by Rajkumar MR and Chuck Farah.

Comments are welcome! To suggest a blog topic or make other comments, contact WinServerBlogs@dell.com.

To read more about Windows Server 2012 and Dell, go to the Windows Server 2012 page on Dell TechCenter.

Offloaded Data Transfer Overview

Windows Server 2012 and EqualLogic follow SCSI command standards to start a copy request through a series of tokenized responses. Tokenized in this case means that fewer data packets are required to communicate the reads and writes between source and destination volumes. By communicating directly with the EqualLogic array, the typical LAN network and host resource consumption are minimized.

ODX adopts the synchronous offload SCSI commands to support advanced MPIO and failover clustering, and to avoid SCSI command time-outs.

In our earlier ODX blog, we discussed the functionality of ODX features. At the time of that post, both Windows Server 2012 and the Dell EqualLogic firmware were in beta. As we expected, there has been a huge performance gain in the released versions of both products.

 

Figure 1: ODX vs. standard copy data flow. The left side represents ODX-enabled firmware; the right side represents non-ODX-enabled firmware.

Note: The standard copy consumes CPU, memory, and network resources on the SAN-attached servers, while Offloaded Data Transfer has little impact on these resources.

In this blog, we will compare the two approaches and share some of the best practices from our testing.

Performance Results

The table below compares the data transfer performance of Windows Server 2012 with ODX enabled and with ODX disabled (a registry entry, shown in the sketch further below, was changed to disable the feature). A copy operation of 11 GB of data from one volume to another within an ODX-enabled PS Series group took 20 seconds on average, compared to 108 seconds without ODX. For a use case where the client is separated by long distances or constrained by network or CPU, the impact would be even more impressive. Consider a user who needs to quickly move a virtual machine file while 2,000 miles away from the data center: ODX makes practical a task that would otherwise be nearly impossible with a standard copy due to bandwidth and latency constraints.

 

Metric | Windows Server 2012 Standard Copy | Windows Server 2012 ODX-Enabled | Improvement
CPU Utilization | 42% (2.3 GHz) | 10% (2.3 GHz) | ~76%
Network Utilization | 37%~100% (1 GigE) | 0%~1% (1 GigE) | ~99%
Copy time of 11 GB | 108 seconds | 20 seconds | ~5x
Speed | 104 MB/s | 630 MB/s | ~5x

Claims are:

  • Up to 76% improvement in CPU utilization *
  • Up to 99% improved network utilization *
  • Up to 5 times improved copy time *
  • Up to 5 times faster data transfer *

*Results based on July 2012 internal Dell testing comparing Windows Server 2012 with ODX enabled to Windows Server 2012 without ODX enabled (standard copy).

These performance results clearly demonstrate the possible improvements in compute resource efficiency.
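For reference, the registry entry used to toggle ODX for these comparisons is the FilterSupportedFeaturesMode value documented by Microsoft for Windows Server 2012; a minimal sketch, run from an elevated command prompt:

  :: Check the current ODX filter setting (value absent or 0 = ODX enabled).
  reg query HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v FilterSupportedFeaturesMode

  :: Disable ODX (1 = disabled); set the value back to 0 to re-enable it.
  reg add HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v FilterSupportedFeaturesMode /t REG_DWORD /d 1 /f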

 

Note: These tests were done in isolation; no additional workload or application process was running at the time of the tests. If there had been additional workloads or application processes running, the difference would typically be even more pronounced. In short, the more pressed a system is for network bandwidth and CPU cycles, the greater the benefit of ODX.

 

Offloaded Data Transfer (ODX) Performance Considerations:

Network between Arrays:

For volumes that transfer large amounts of data between members (in separate pools or the same pool), the link between the arrays may be the limiter. For instance, if your members have gigabit Ethernet interfaces and your volume spans more than one array, the overall bandwidth is limited by the network capability within that group or pool.

For the greatest efficiency, consider 10 GigE arrays in the same group, reading from one array or pool and writing to the other. This implies at least two separate arrays or pools; however, transfers can still occur within a single member or pool.

Disk Technologies

NL-SAS is often a very cost-effective technology with good performance characteristics, particularly for large sequential transfers such as those generated by ODX. On the other end of the spectrum, SSD technology is optimal in very high random-read environments. SAS 15K RPM or 10K RPM disks provide the most predictable performance across a multitude of read/write patterns, block sizes, and latencies.

Additional I/O Activity

The ability of the PS Series SAN to accommodate multiple I/O loads at once should be considered when setting expectations for data copies using technologies like ODX. While an ODX copy will generally still perform more efficiently even when other workloads are present, the speed of completion will depend on the array resources available to perform the reads and writes.

 

Conclusion

Offloaded Data Transfer with Windows Server 2012 and PS Series firmware provides accelerated data movement. The capabilities of this technology help open up new business use cases for PS Series SANs in Microsoft environments.  

For more detailed information regarding Windows Server 2012, go to the Windows Server 2012 page on Dell TechCenter.

 


New EqualLogic RAID Tech Report considerations and best practices released!


Hey everyone! I wanted to let you all know that a new Technical Report was recently released, called TR1020 EqualLogic™ PS Series Storage Arrays: Choosing a Member RAID Policy. This is an updated Tech Report that discusses some of the new changes as well as best practices, recommendations, and performance guidelines to help you, the customer, choose the best RAID type for your EqualLogic PS Series members.

You can download the new guide here: http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/19861480/download.aspx

With the new version 6 firmware for the PS Series SAN, there will be some changes for new arrays during configuration. An excerpt from the document shows which RAID policy options are available during setup for these new arrays:

RAID Policy | Configurable in GUI | Configurable in CLI | Best Practice for Business-Critical Data?
RAID 6 | Yes | Yes | Yes
RAID 10 | Yes | Yes | Yes
RAID 50 | Yes | Yes | No for 7200 rpm drives 1 TB and larger; Yes for other drives
RAID 5 | No | Yes | No

One thing you'll notice is that RAID 5 is no longer a best practice for business-critical data on any drive type, and RAID 50 is no longer a best practice on 7200 rpm SATA/NL-SAS drives 1 TB and larger. As drives get bigger and bigger, RAID sets take longer to rebuild, and the performance and capacity benefits you get from these RAID levels don't outweigh the higher reliability risk compared to RAID 10 or RAID 6.

This doesn't mean that your existing environment will no longer be supported or will stop working after an upgrade. The report also has some good information on the performance characteristics of each RAID type, as well as some across-the-board recommendations and best practices.
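As the table above shows, RAID 5 can only be selected from the CLI. For illustration, setting a member's RAID policy from the EqualLogic Group Manager CLI looks roughly like the sketch below; the member name is a placeholder, and you should confirm the exact syntax for your firmware version in the CLI reference:

  > member select member01 raid-policy raid6

Changing the RAID policy of a member that already holds data is a significant operation, so review the guidance in TR1020 before doing so.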

So have a look and download it; there's some great information in there!

As always, the disclaimer:  I work for Dell as a Product Solutions Engineer in the Nashua Design Center working on VMware integration with the EqualLogic PS Series SAN products. The views and discussions on this blog are my own and do not necessarily represent my Employer. The idea of this blog is to educate and entertain and hopefully it will accomplish that.

You can follow me on Twitter at @VirtWillU or email me.

Until next time, keep it virtual.

-Will

See more of my posts on my landing page.

Also, check out Dell TechCenter for these videos and other content on EqualLogic.

Improving Exchange's High Availability with Dell AIM


This post was authored by Varun Kulkarni of Global Solutions Engineering

Dell Advanced Infrastructure Manager (AIM) can quickly and automatically recover a failed Exchange 2010 mailbox or multi-role server, helping organizations meet their recovery time objectives without manual intervention. Dell AIM simplifies server recovery and significantly reduces recovery time; bringing up a failed Exchange server otherwise takes hours of manual effort, involving tasks such as reprovisioning the server hardware and operating system with Exchange Server 2010 and redistributing active Exchange databases, to list a few.

A Microsoft Exchange Server 2010 Database Availability Group (DAG) enters a degraded state when one or more members of the DAG fail. The DAG members are typically mailbox servers or multi-role Exchange servers hosting the Exchange databases that provide high availability for the Exchange deployment. The degraded state of the DAG is due to the Input/Output Operations per Second (IOPS) and latency for the failed database not meeting the expected performance. The failed DAG member should be recovered as soon as possible in order to recover the failed databases and reinstate the healthy state of the DAG. Depending on the failure, recovering a failed mailbox or multi-role server can be a complex and time-consuming process that may involve significant manual intervention from the IT department. In small or medium-sized organizations, a subsequent failure of another DAG member may result in the loss of precious email data; thus delayed recovery of the failed DAG member is not recommended, as it can be catastrophic.

Apart from restoring the DAG to its healthy state in an automated way, AIM facilitates centralized management of the network and compute infrastructure within the Exchange ecosystem. It provides the ability to dynamically configure and manage Exchange MAPI (Messaging API) and DAG replication networks on multiple servers from a single console, and it provides resiliency for the network adapters. Using AIM, high availability can also be provided to the non-mailbox server roles, including the Active Directory (AD) domain controller. A two-copy DAG configuration managed by Dell AIM software is shown here:

In a white paper, engineers at Dell Global Solutions Engineering showcase a detailed study of how a failed multi-role Exchange 2010 server can be quickly and automatically recovered, restoring the DAG to its original healthy configuration, using AIM software, 12th generation Dell PowerEdge servers, and EqualLogic storage. The paper can be found here.

Citrix XenServer, Xen Cloud Platform, & just plain Zen... No, it's Xen, with an X


Everybody has their heads in the Cloud these days. And it sure is getting cloudy. We love having choices, and there are a lot of them when it comes to choosing a hypervisor. Dell partners with Citrix to provide Citrix Ready servers, and we offer 1- and 3-year Subscription Advantage licenses for Citrix XenServer Enterprise Edition. In my day-to-day interactions, invariably, someone will refer to XenServer as Xen (or Zen). Xen and XenServer are NOT the same thing, although they often get lumped together. And now that Xen Cloud Platform (XCP) has entered the sky... I wanted to take a few minutes to distinguish all three to avoid confusion in the future.

Xen

Xen is a hypervisor, type 1 (bare metal) only. Sometimes referred to as a VMM (virtual machine monitor), the Xen hypervisor requires some real know-how to manage. It is open source (GPLv2) and supported by more than 50 solution providers. Xen is the hypervisor in XenServer, Xen Cloud Platform, and other distributions like Oracle VM. Xen is used by Amazon Web Services and many more web hosting services and data center solution providers. First released to the public in 2003, Xen was developed at the University of Cambridge as part of a project called "The XenoServer wide-area computing project." The name derives from the Greek word xenos, which means foreign or unknown. Xen is unique in the way it efficiently "hypervises" guest operating systems, running them paravirtualized and safely disconnected from the system's hardware resources. The abstract for a 1999 paper, "Xenoservers: Accountable Execution of Untrusted Programs," sounds like a blueprint for IaaS (infrastructure as a service) and/or PaaS (platform as a service). I found this description quite interesting.

"We propose a system that can execute code supplied by an un-trusted user, yet can charge this user for all resources consumed by the computation. Such servers could be deployed at strategic locations throughout the Internet, enabling network users such as content providers to distribute components of their applications in a manner that is both efficient and economical."

To wrap this part up, Xen is an incredibly small program, approximately 150,000 lines of code, that creates the hypervisor, managing all of the guest operating systems and system resources much like the supervisor program in a conventional operating system. The XenoServer project spawned the creation of a company called XenSource for the purpose of developing commercial products based on Xen, while leaving Xen itself open source. In 2007, XenSource was acquired by Citrix Systems, Inc. Xen has been included in the mainline Linux kernel since the release of Linux 2.6.37 in 2011, and Citrix has a firm commitment to keeping Xen open source.

 

XenServer

Shortly before the acquisition by Citrix, XenSource released XenEnterprise v4, a commercial product that moved XenSource into the enterprise. These commercial products provided a tool stack that made using Xen easy. Other products were XenExpress and XenServer. With the next release of the commercial products, Citrix renamed them XenServer Express, XenServer Standard Edition, and XenServer Enterprise Edition. There is also now a XenServer Platinum Edition. Installation is very easy, and getting a VM up and running is simple using the Windows-based management program called XenCenter.

XenServer is like an operating system with Xen in it that makes it easy to use. Or should I say, easier to use than Xen alone? Stacked on top of the Xen kernel are a wide range of tools that are designed to take virtualization to the next step. While Citrix products like XenDesktop, XenApp, and Kaviza are hypervisor agnostic, Citrix plans to make XenServer the best performing and preferred hypervisor on the market to run these amazing products. XenServer has a wide range of features and with each release it just keeps getting better and better.
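For a small taste of that tool stack beyond XenCenter, XenServer also ships with the xe command-line interface in its control domain. A minimal sketch (the VM name below is a placeholder):

  # List the VMs known to the host or pool.
  xe vm-list

  # Start a halted VM by its name label.
  xe vm-start vm=my-test-vm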

 

Xen Cloud Platform (XCP)

In 2009, Xen.org announced the Xen Cloud Platform (XCP) initiative. We all know how effective open source can be. In 2011, less than two years later, the open source community released the first version of XCP. Bottom line: XCP is the open source equivalent of XenServer (Free Edition), but it adds some features only available in XenServer Advanced, Enterprise, and Platinum. Check out the table showing the differences at http://wiki.xen.org/wiki/XCP/XenServer_Feature_Matrix. Both CloudStack and OpenStack support Xen and XCP deployments.

 

Just remember: Xen is only the Xen kernel; XenServer is Xen plus marvelous tools for all of your virtualization needs; and Xen Cloud Platform (XCP) is the open source version of XenServer that really kicks it up a notch. Look forward to more articles on XenServer and XCP in the future.

If there is a particular topic regarding XenServer or XCP that interests you, comment on this post and we will consider it for future blogs.

Microsoft TechNet: Windows 8 RTM ready to download via MSDN and TechNet


Hello Dell Community, 

Starting at 1:00 PM ET, Windows 8 RTM is downloadable via TechNet and MSDN!

Download Evaluation

What are your thoughts?
Please share in the comment section below! 

Best regards

Flo 

