
Modular building blocks for a flexible data center


Paul Steeves is a senior marketing manager for the Dell Enterprise Solutions Group

An adaptable IT infrastructure is critical in helping enterprises match specific workload requirements and keep pace with advances in computing technology.

Enterprise computing needs are dynamically changing as business and technology leaders embrace strategic computing innovations to create novel opportunities and gain competitive advantage. The ever-increasing demand for cloud, the exponential expansion of enterprise mobility, the widespread adoption of big data initiatives and the rise of software-defined infrastructures: All these factors drive IT decision makers to evaluate fresh approaches in the data center.

Many IT leaders are looking to adopt the latest application workload paradigms that industry leaders are pioneering. Wherever possible, they want to gain the economic advantages that scale-out technologies have achieved for cloud providers.

Scalable architecture

To address the challenges introduced by the latest computing demands, the Dell PowerEdge FX converged architecture is designed to give enterprises the flexibility to tailor the IT infrastructure to specific workloads — and the ability to scale and adapt that infrastructure as needs change over time.

The FX architecture is based on a modular, building-block concept that makes it easy for enterprises to focus processing resources where needed. This concept is realized through the PowerEdge FX2 chassis, the foundation of the FX architecture. The PowerEdge FX2 is a 2U rack-based, converged computing platform that combines the density and efficiencies of blades with the simplicity and cost advantages of rack-based systems.

The PowerEdge FX2 houses flexible blocks of server, storage and I/O resources while providing outstanding efficiencies through shared power, networking, I/O and management within the chassis itself. Although each server block has some local storage, the FX architecture allows servers to access multiple types of storage, including a centralized storage area network (SAN) and direct attach storage (DAS) in FX storage blocks or in Just a Bunch of Disks (JBODs).

The FX architecture lets data centers easily support an IT-as-a-service approach because it is specifically designed to fit the scale-out model that the approach embraces. At the same time, its inherent flexibility adds value to existing environments. In data centers of all sizes, the FX architecture enables deployments to be rightsized, efficient and cost-effective.

Easy workload optimization

A key design tenet of the FX architecture is ease of workload optimization. The modular blocks of computing resources and the broad range of components available in the FX architecture let data center operators quickly match infrastructure to the needs of their workloads.

Server blocks. The rich set of features that are hallmarks of PowerEdge servers makes individual FX server blocks especially flexible in the functionality they can offer. For example, the PowerEdge FM120x4 microserver block addresses the requirements of scale-out computing by optimizing power consumption and footprint. Web services providers can benefit tremendously from the high density, easy manageability and cost-effectiveness that the PowerEdge FM120x4 affords. It is also suited for processing tasks such as batch data analytics.

The PowerEdge FC430 is an excellent option for web serving, virtualization, dedicated hosting and other midrange computing tasks. Its extra-small, quarter-width size is designed to make the PowerEdge FC430 one of the densest solutions on the market. The small, modular form factor of the PowerEdge FC430 enables data centers to host a large number of virtual machines and applications on physically discrete servers, minimizing the impact of potential failures on overall operations. This capability makes the PowerEdge FC430 an outstanding choice for distributed environments that require physical separation for security, regulatory compliance or heightened levels of reliability. An InfiniBand®-capable version is also available for low-latency processing.

The PowerEdge FC630 server, with its high-performance processors and large memory capacity, can serve as a strong foundation for corporate data centers and private clouds. It readily handles demanding business applications such as enterprise resource planning (ERP) and customer relationship management (CRM), and it also can host a large virtualization environment.

With support for up to four high-performance processors and exceptionally large memory capacity, the PowerEdge FC830 server is designed to handle very demanding, mission-critical workloads of midsize and large enterprises, whether they are large-scale virtualization deployments, centralized business applications or the database tier of web technology and high-performance computing environments.

Storage block. The PowerEdge FD332 storage block provides dense, highly scalable DAS for most FX infrastructures.² It is a critical component of the FX architecture, enabling future-ready, scale-out infrastructures that bring storage closer to compute for accelerated processing. When used in pass-through mode, it can support software-defined architectures like VMware® Virtual SAN™, Microsoft® Storage Spaces and Nutanix® platforms.

The PowerEdge FD332 is excellent for consolidation of environments that require high-performance, scale-out storage, such as Apache™ Hadoop® deployments. It also is well suited for dense virtual SAN (vSAN) environments, providing cost-effective, high-capacity hard disk drives (HDDs) that work with solid-state drive (SSD) caches in the server blocks.

I/O blocks. The PowerEdge FN410s, PowerEdge FN410t and PowerEdge FN2210s I/O blocks provide plug-and-play, network-switch layer 2 functions. These powerful I/O aggregators help simplify cable management while also enabling networking features such as optimized east-west (server-to-server) traffic within the chassis, LAN/SAN convergence and streamlined network deployment.

Advanced enterprise management

The FX architecture supports heightened levels of automation, simplicity and consistency across IT-defined configurations. This enables IT administrators to leverage their past experience with Dell OpenManage systems management tools and maintain the field-tested benefits of comprehensive, agent-free management over the entire platform lifecycle: deploy, update, monitor and maintain. Additionally, the FX platform offers administrators a wide range of systems management alternatives and capabilities.

Administrators can elect to manage FX systems like a rack server — locally or remotely — using the Integrated Dell Remote Access Controller 8 (iDRAC8) with Lifecycle Controller. Or they can manage the servers and chassis collectively in a one-to-many fashion using the innovative Chassis Management Controller (CMC), an embedded server management component. These options enable administrators to easily adopt FX servers without changing existing processes.

Each FX server block’s iDRAC8 with Lifecycle Controller provides agent-free management independent of the hypervisor or OS installed. The iDRAC8 can be used to manage and monitor shared infrastructure components such as fans and power supply units. Any alerts are reported by each server block, just as with a traditional rack server.

Alternatively, these same alerts are routed through the CMC when it is used to manage the FX infrastructure. Administrators also can use the CMC’s intuitive web interface to manage the server blocks through the iDRAC8 with Lifecycle Controller or through platform networking.

In addition, the CMC can monitor up to 20 FX systems at a glance, perform one-to-many BIOS and firmware updates, and maintain slot-based server configuration profiles that update BIOS and firmware when a new server is installed. Each of these abilities helps deliver time savings over conventional management and reduce the risk of human-entry errors by automating repetitive tasks.
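For administrators who want to script this kind of monitoring rather than use the web interface, the remote racadm command-line tool is one common route. The sketch below is a minimal illustration only, assuming racadm is installed on a management workstation with network access to the CMC; the address and credentials are placeholders, not values from this article.

```python
import subprocess

# Placeholder CMC address and credentials for illustration only.
CMC_HOST = "192.0.2.10"
USER, PASSWORD = "root", "calvin"

def racadm(*args: str) -> str:
    """Run a remote racadm command against the CMC and return its output."""
    cmd = ["racadm", "-r", CMC_HOST, "-u", USER, "-p", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Pull overall chassis information, then sensor readings (fans, PSUs, temperatures).
print(racadm("getsysinfo"))
print(racadm("getsensorinfo"))
```

Wrapped in a loop or a scheduled job, a script like this can feed chassis health into whatever monitoring pipeline a data center already runs.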

Finally, OpenManage Essentials and OpenManage Mobile provide remote monitoring and management across FX and PowerEdge servers as well as Dell storage, networking and firewall devices.

Foundation for a future-ready, agile infrastructure

By creating PowerEdge FX as a flexible converged architecture that can grow and advance with the latest technologies, Dell enables enterprises to deploy IT infrastructure that easily adapts to the ever-shifting business and technology landscape. The foundations IT decision makers invest in today are designed to support the changes they implement tomorrow, giving enterprises the agility to remain competitive in a fast-moving marketplace.

Kevin Oliver and John Abrams contributed to this article.

Learn More

Dell PowerEdge FX 

Dell OpenManage 

¹ Dell expects to release the PowerEdge FX2 and the PowerEdge FC630, PowerEdge FM120x4 and FN I/O aggregators in December 2014 and the PowerEdge FC430, PowerEdge FC830 and PowerEdge FD332 in the first half of 2015.

² The PowerEdge FD332 block does not support the PowerEdge FM120 microserver.


Adaptable architecture for workload customization, part 1


Paul Steeves is a senior marketing manager for the Dell Enterprise Solutions Group

An adaptable IT infrastructure is critical in helping enterprises match specific workload requirements and keep pace with advances in computing technology.

The Dell PowerEdge FX architecture enables IT infrastructures to be constructed from small, modular blocks of computing resources that can be easily and flexibly scaled and managed.

PowerEdge FX2 chassis: Modular platform

A small, modular foundation for the FX architecture, the flexible and efficient PowerEdge FX2 can be easily customized to fit an organization’s specific computing needs. The PowerEdge FX2 is a standard 2U rack-based platform with shared, redundant power and cooling; I/O fabric; and management infrastructure to service flexible blocks of compute and storage resources. It can be configured to hold half-width, quarter-width or full-width 1U blocks. The switched configuration, the PowerEdge FX2s, adds up to eight low-profile PCI Express (PCIe) Gen 3 expansion slots, which the standard PowerEdge FX2 does not include.

Part of the shared fabric in the PowerEdge FX2 is reserved for systems management performed through the Integrated Dell Remote Access Controller 8 (iDRAC8) with Lifecycle Controller of each server block. Using this fabric and redundant ports, iDRAC8 can monitor, manage, update, troubleshoot and remediate FX servers from any location — without the use of agents.

Additionally, the PowerEdge FX2 chassis hosts redundant, quad-port pass-through Gigabit Ethernet (GbE) or 10 Gigabit Ethernet (10GbE) I/O modules. Administrators have the option of replacing these modules with FN I/O aggregators that are designed to add more network functionality, simplify physical cabling and reduce the complexity and cost of upstream switching.

Finally, the PowerEdge FX2 chassis helps improve cost-efficiency through its shared cooling and power architecture.

PowerEdge FX2 chassis with eight PowerEdge FC430 servers

PowerEdge FC830 server: Dense, scale-up computing

Providing dense compute and memory scalability and a highly expandable storage subsystem, the PowerEdge FC830 excels at running a wide range of applications and virtualization environments for both midsize and large enterprises. The PowerEdge FC830 is a full-width, four-socket server block that has either eight 2.5-inch drives or sixteen 1.8-inch drives and can access up to eight PCIe expansion slots.

The PowerEdge FC830 provides flexible virtualization with excellent virtual machine density and highly scalable resources for the consolidation of large or performance-hungry virtual machines. In addition, the server can incorporate the use of storage area network (SAN), direct attach storage (DAS) or virtual storage environments.

Individual PowerEdge FC830 server with sixteen 1.8-inch drives

PowerEdge FC630 server: High-performance workhorse

Designed for enterprises looking for high-performance computational density, the PowerEdge FC630 is a powerful workhorse for IT infrastructures. The half-width, two-socket server delivers an exceptional amount of computing power in a very small, easily scalable form factor that includes the latest 18-core Intel® Xeon® processor E5-2600 v3 product family and up to 24 dual in-line memory modules (DIMMs). This means that a 2U PowerEdge FX2 chassis fully loaded with four PowerEdge FC630 servers can be scaled up to 144 cores and 96 DIMMs.

The PowerEdge FC630 offers two internal storage options: an eight 1.8-inch drive configuration and a two 2.5-inch drive configuration, the latter of which supports PowerEdge Express Flash non-volatile memory express (NVMe) PCIe devices. As a result, the PowerEdge FC630 can benefit from the ultrahigh performance and ultralow latency of those devices and also participate as a cache provider in Dell Fluid Cache for SAN infrastructures.

The PowerEdge FC630, like the PowerEdge FC830, takes advantage of other innovations such as PowerEdge Select Network Adapters, Switch Independent Partitioning, fail-safe hypervisors and Dell OpenManage agent-free systems management.

Four half-width PowerEdge FC630 servers in a PowerEdge FX2 chassis

PowerEdge FC430 server: High-density processing

Offering outstanding shared infrastructure density, the PowerEdge FC430 is an excellent choice for data centers where a large number of computational nodes, or virtual machines, are needed to run midtier applications. The quarter-width, two-socket PowerEdge FC430 is designed with the right balance of performance, memory and I/O to deliver the necessary resources to many client applications in a highly efficient fashion.

Accommodating up to eight DIMMs and powered by the Intel Xeon processor E5-2600 v3 product family that supports up to 14 cores, the PowerEdge FC430 provides a tremendous amount of processing resources in an ultrasmall space. By fully loading the PowerEdge FX2 with eight PowerEdge FC430 servers, administrators can scale up to 224 cores and 64 DIMMs.

For caching and storage, each PowerEdge FC430 has one of two configurations: two internal 1.8-inch Serial ATA (SATA) solid-state drives (SSDs) and access to a PCIe Gen 3 expansion slot in the PowerEdge FX2s chassis, or one 1.8-inch SSD with a front-access InfiniBand mezzanine card connection that bypasses the PCI switch. These configurations provide low latency and high throughput for environments like high-performance computing and high-frequency trading. The PowerEdge FC430 supports networking with either a dual-port GbE or a dual-port 10GbE LAN on Motherboard.

Individual PowerEdge FC430 server with two 1.8-inch drives

PowerEdge FM120x4 microserver block: Cost-effective scalability

Built for workloads that prioritize scale-out density and power efficiency over performance, the PowerEdge FM120x4 merges four single-socket PowerEdge FM120 microservers on a single, half-width sled. This design contributes to the microserver block’s impressive density, which is enabled by the innovative system-on-chip (SOC) design of the Intel® Atom™ processor C2000.

A fully loaded, 2U PowerEdge FX2 chassis can host 16 individual microservers, each with two DIMMs and either one 2.5-inch front-access hard disk drive (HDD) or two 1.8-inch SSDs. By using the maximum number of eight-core processors, administrators can add 128 cores and 32 DIMMs to an IT infrastructure with each 2U FX system.

The entry-level PowerEdge FM120x4 is especially well suited to scale-out web services. The Intel Atom processor is engineered to consume very little energy, leading to reduced operating costs, and its SOC design minimizes footprint for added space savings.

Four half-width PowerEdge FM120x4 servers in a PowerEdge FX2 chassis


Kevin Oliver and John Abrams contributed to this article.

Anypoint Systems Management e-Book – Beyond Managing PCs, Macs and Servers


Do you ever wish you could be like the guy in the photo? Wouldn't it be cool to have a zillion smartphones, tablets, laptops, MacBooks and Chromebooks all around you to play with? Sure it would.

Oh, wait — you’re in IT. Your co-workers are already coming in to work with all kinds of BYO devices like the ones in the photo. You already have a zillion devices around you. So why don’t you have a smile on your face, the way this fellow does?

Because you have to manage and secure them all, that’s why.

In my last post I introduced you to Eddie Endpoint and mentioned our new e-book, “A Single Approach to Anypoint Systems Management.” This time I’ll describe how the endpoint world is growing beyond the PCs, Macs and servers you and Eddie have known, and gradually becoming an “anypoint” world.

Anypoint management is here. Are you ready?

We sponsored a Dimensional Research survey a couple of months ago and uncovered what over 700 of your colleagues think about managing these anypoints. Do some of these findings ring a bell with you?

  • In addition to traditional computing devices, 96 percent of those surveyed had printing devices, 84 percent had mobile devices and 53 percent had audio-visual devices connected to their networks.

  • More than half of the survey respondents had three or more systems management tools. That makes sense because lots of administrators have to manage their mobile and BYO devices separately from their traditional computing devices, using mobile device management products and the like. Still, 67 percent of those polled wanted to use fewer systems, which makes even more sense.

  • More than 60 percent of the survey participants were sure, or suspected, that there were unknown devices or applications connected to their networks. Nobody in IT likes that, so it’s a wake-up call to start pulling BYO, mobile and other network-connected devices into mainstream systems management products and keep them from falling through the cracks.

But how are you going to bring all of your anypoints into a single systems management umbrella? Stay tuned for more in my next post. Meanwhile, read Part 1 of our new e-book right now.

 

Anypoint Systems Management e-Book — Nailing Systems Management Basics First


In creating our e-book, “A Single Approach to Anypoint Systems Management,” we looked at the baseline systems management tasks that IT professionals spend the day performing in their traditional endpoint world:

  • Managing hardware and software assets — Inventory, reporting, technology specification, desktop configuration
  • OS deployment and management — Deploying and managing system images for hundreds or thousands of computers on different platforms
  • Application updates — Software distribution and installation of service packs, updates, hotfixes and other digital assets to PCs and servers
  • Patching and configuration enforcement — Performing patching to ensure the latest updates, assessing and addressing vulnerabilities to block potential exploits
  • Service desk — Troubleshooting and resolving user issues quickly to keep workers productive

Of course, those are just the basics. And they add up to no small challenge, to be sure, especially in the face of limited budgets and staffs.

How are you accomplishing these systems management tasks? Are you using spreadsheets? Manual processes? One-off point solutions? Can you see at a glance all of the endpoints on your network? Can you control them and perform most of those baseline systems management tasks on them?

You’ve got to walk before you can run, and you’ve got to handle all of those basics before you can move on to mobile. You might be able to add a mobile device management (MDM) product to your IT toolbox, but if it’s not integrated, you’re asking for more complexity and new cost structures for support, infrastructure and training.

But ready or not, the advent of mobile, BYO and intelligent connected devices is upon you. Every device in your organization must be identified, managed and secured. To keep up in the anypoint systems management world, you’ll need to bring all of your devices under a single systems management solution.

Have a look at Part 1 of our new e-book for more on nailing your systems management basics, to get yourself ready to add mobile, BYO and new connected devices into the systems management mix.

Adaptable architecture for workload customization, part 2


By Paul Steeves


An adaptable IT infrastructure is critical in helping enterprises match specific workload requirements and keep pace with advances in computing technology.

The Dell PowerEdge FX architecture enables IT infrastructures to be constructed from small, modular blocks of computing resources that can be easily and flexibly scaled and managed.

PowerEdge FD332 storage block: Dense and flexible DAS

The densely packed PowerEdge FD332 storage block allows FX-based infrastructures to rapidly and flexibly scale storage resources. The PowerEdge FD332 is a half-width, 1U module that holds up to sixteen 2.5-inch hot-plug Serial Attached SCSI (SAS) or SATA SSDs or HDDs. The PowerEdge FD332 is independently serviceable while the PowerEdge FX2 chassis is operating.

FX servers can be attached to one or more PowerEdge FD332 blocks. For each storage block, a single server can attach to all 16 drives, or two servers can split the block and attach to 8 drives each.

This flexibility lets administrators combine FX servers and storage in a wide variety of configurations to address specific processing needs. For example, three PowerEdge FD332 blocks can provide up to 48 drives in a single 2U PowerEdge FX2 chassis — while leaving one half-width chassis slot to house a PowerEdge FC630 for processing. The result is a 2U system with massive direct-attach capacity, enabling a pay-as-you-grow IT model.


Two half-width PowerEdge FD332 storage blocks and two PowerEdge FC630 server blocks in a PowerEdge FX2 chassis

FN I/O aggregators: Effective network consolidation

Designed to simplify cabling for the PowerEdge FX2 chassis, the FN I/O aggregators offer as much as eight-to-one cable aggregation, combining eight internal server ports into one external cable. The I/O aggregators also optimize east-west traffic communication within the chassis, helping greatly increase overall performance by accelerating virtual machine migration and significantly lowering overall latency.

The FN I/O aggregators include automated networking functions with plug-and-play simplicity that give server administrators access-layer ownership. Administrators can quickly and easily deploy a network using the simple graphical user interface of the Chassis Management Controller (CMC) or perform custom network management through a command-line interface. The I/O aggregators also are able to provide Virtual Link Trunking as well as uplink link aggregation.

The FN I/O aggregators support Data Center Bridging (DCB), Fibre Channel over Ethernet (FCoE) and Internet SCSI (iSCSI) optimization to enable converged data and storage traffic. By converging I/O through the aggregators, it is possible to eliminate redundant SAN and LAN infrastructures within the data center. As a result, cabling can be reduced up to 75 percent when connecting server blocks to upstream switches like the Dell Networking S5000 10/40GbE unified storage switch for full-fabric Fibre Channel breakout. Moreover, I/O adapters can be reduced by 50 percent.

PowerEdge FN410s. The FN I/O aggregator provides four ports of small form-factor pluggable + (SFP+) 1/10GbE connectivity and eight 10GbE internal ports. With SFP+ connectivity, the I/O aggregator supports optical and direct-attach copper (DAC) cable media.


PowerEdge FN410s: Four-port SFP+ I/O aggregator

PowerEdge FN410t. The FN I/O aggregator offers four ports of 1/10GbE 10GBASE-T connectivity and eight 10GbE internal ports. The 10GBASE-T connectivity supports cost-effective copper media with maximum transmission distance up to 328 feet (100 meters).

PowerEdge FN410t: Four-port 10GBASE-T I/O aggregator

PowerEdge FN2210s. Providing innovative flexibility for convergence within the PowerEdge FX2 chassis, the PowerEdge FN2210s delivers up to two ports of 2/4/8 Gbps Fibre Channel bandwidth through N_Port ID Virtualization (NPIV) proxy gateway (NPG) mode, along with two SFP+ 10GbE ports. It also can be reconfigured with four SFP+ 10GbE ports through a reboot.

PowerEdge FN2210s: Four-port combination Fibre Channel/Ethernet I/O aggregator

NPG mode technology enables the PowerEdge FN2210s to use converged FCoE inside the PowerEdge FX2 chassis while maintaining traditional unconverged Ethernet and native Fibre Channel outside of the chassis. To converged network adapters, the PowerEdge FN2210s appears as a Fibre Channel forwarder, while its Fibre Channel ports appear as NPIV N_ports or host bus adapters to the external Fibre Channel fabric. This capability allows for connectivity upstream to a Dell Networking S5000 storage switch or many widely deployed Fibre Channel switches providing full fabric services to a SAN array; the PowerEdge FN2210s itself does not provide full Fibre Channel fabric services.


Interactive data analytics drive insights


By Armando Acosta


The Apache Hadoop® platform speeds storage, processing and analysis of big, complex data sets, supporting innovative tools that draw immediate insights.

Big data has taken a giant leap beyond its large-enterprise roots, entering boardrooms and data centers across organizations of all sizes and industries. The Apache Hadoop platform has evolved along with the big data landscape and emerged as a major option for storing, processing and analyzing large, complex data sets. In comparison, traditional relational database management systems and enterprise data warehouse tools often lack the capability to handle such large amounts of diverse data effectively.

Hadoop enables distributed parallel processing of high-volume, high-velocity data across industry-standard servers that both store and process the data. Because it supports structured, semi-structured and unstructured data from disparate systems, the highly scalable Hadoop framework allows organizations to store and analyze more of their data than before to extract business insights. As an open platform for data management and analysis, Hadoop complements existing data systems to bring organizational capabilities into the big data era as analytics environments grow more complex.

Evolving data needs

Early adopters tended to utilize Hadoop for batch processing; prime use cases included data warehouse optimization and extract, transform, load (ETL) processes. Now, IT leaders are expanding the application of Hadoop and related technologies to customer analytics, churn analysis, network security and fraud prevention — many of which require interactive processing and analysis.

As organizations transition to big data technologies, Hadoop has become essential for enabling predictive analytics that use multiple data sources and types. Predictive analytics helps organizations in many different industries answer business-critical questions that had been beyond their reach using basic spreadsheets, databases or business intelligence (BI) tools. For example, financial services companies can move from asking “How much does each customer have in their account?” to answering sophisticated business enablement questions such as “What upsell should I offer a 25-year-old male with checking and IRA accounts?” Retail businesses can progress from “How much did we sell last month?” to “What packages of products are most likely to sell in a given market region?” A healthcare organization can predict which patient is most likely to develop diabetes and when.

Using Hadoop and analytical tools to manage and analyze big data, organizations can personalize each customer experience, predict manufacturing breakdowns to avoid costly repairs and downtime, maximize the potential for business teams to unlock valuable insights, drive increased revenue and more. [See the sidebar, “Doing the (previously) impossible.”]
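Under the hood, questions like these come down to classification models trained on historical records. As a toy sketch of that general shape — deliberately not part of the Dell | Cloudera stack described in this article — the following trains a churn-style classifier with scikit-learn on synthetic placeholder data:

```python
# Illustrative only: a toy churn-prediction model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: account age (months), monthly spend, support tickets.
X = rng.normal(size=(1000, 3))
y = (X[:, 2] > 0.5).astype(int)  # synthetic label standing in for "churned"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```

In a real deployment, the feature matrix would be assembled from the Hadoop data platform rather than generated in memory.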

Parlaying big data to best advantage

Effective use of big data is key to competitive gain, and Dell works with ecosystem partners to help organizations succeed as they evolve their data analytics capabilities. Cloudera plays an important role in the Hadoop ecosystem by providing support and professional feature development to help organizations leverage the open-source platform.

The combination of Cloudera® software on Dell servers enables organizations to successfully implement new data capabilities on field-tested, low-risk technologies. (See section, “Taking Hadoop for a test-drive.”)

Dell | Cloudera Hadoop Solutions comprise software, hardware, joint support, services and reference architectures that support rapid deployment and streamlined management (see figure). Dell PowerEdge servers, powered by the latest Intel® Xeon® processors, provide the hardware platform.

Solution stack: Dell | Cloudera Hadoop Solutions for big data

Dell | Cloudera Hadoop Solutions are available with Cloudera Enterprise, designed specifically for mission-critical environments. Cloudera Enterprise comprises the Cloudera Distribution including Apache Hadoop (CDH) and the management software and support services needed to keep a Hadoop cluster running consistently and predictably. Cloudera Enterprise allows organizations to implement powerful end-to-end analytic workflows — including batch data processing, interactive query, navigated search, deep data mining and stream processing — from a single common platform.

Accelerated processing. Cloudera Enterprise leverages Hadoop YARN (Yet Another Resource Negotiator), a resource management framework designed to transition users from general batch processing with Hadoop MapReduce to interactive processing. The Apache Spark™ compute engine provides a prime example of how YARN enables organizations to build an interactive analytics platform capable of large-scale data processing. (See the sidebar, “Revving up cluster computing.”)
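To make the processing model concrete, the hypothetical PySpark script below counts events by type across a data set in HDFS; the input path and field layout are assumptions, and on a YARN-managed cluster a job like this would typically be launched with spark-submit --master yarn.

```python
# Hypothetical PySpark batch job; the HDFS path and record format are placeholders.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("event-counts")
sc = SparkContext(conf=conf)

events = sc.textFile("hdfs:///data/events/*.log")  # placeholder input path
counts = (events
          .map(lambda line: (line.split(",")[0], 1))  # assume CSV, event type in field 0
          .reduceByKey(lambda a, b: a + b))

# Print the ten most frequent event types.
for event_type, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(event_type, n)
sc.stop()
```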

Built-in security. Role-based access control is critical for supporting data security, governance and compliance. The Apache Sentry system, integrated in CDH, enhances data access protection by defining what users and applications can do with data, based on permissions and authorization. Apache Sentry continues to expand its support for other ecosystem tools within Hadoop. It also includes features and functionality from Project Rhino, originally developed by Intel to enable a consistent security framework for Hadoop components and technologies.

Supporting rapid big data implementations

Dell | Cloudera Hadoop Solutions, accelerated by Intel, provide organizations of all sizes with several turnkey options to meet a wide range of big data use cases.

Getting started. Dell QuickStart for Cloudera Hadoop enables organizations to easily and cost-effectively engage in Hadoop development, testing and proof-of-concept work. The solution includes Dell PowerEdge servers, Cloudera Enterprise Basic Edition and Dell Professional Services to help organizations quickly deploy Hadoop and test processes, data analysis methodologies and operational needs against a fully functioning Hadoop cluster.

Taking the first steps with Hadoop through Dell QuickStart allows organizations to accelerate cluster deployment to pinpoint effective strategies that address the business and technical demands of a big data implementation.

Going mainstream. The Dell | Cloudera Apache Hadoop Solution is an enterprise-ready, end-to-end big data solution that comprises Dell PowerEdge servers, Dell Networking switches, Cloudera Enterprise software and optional managed Hadoop services. The solution also includes Dell | Cloudera Reference Architectures, which offer tested configurations and known performance characteristics to speed the deployment of new data platforms.

Cloudera Enterprise is thoroughly tested and certified to integrate with a wide range of operating systems, hardware, databases, data warehouses, and BI and ETL systems. Broad compatibility enables organizations to take advantage of Hadoop while leveraging their existing tools and resources.

Advancing analytics. The shift to near-real-time analytics processing necessitates systems that can handle memory-intensive workloads. In response, Dell teamed up with Cloudera and Intel to develop the Dell In-Memory Appliance for Cloudera Enterprise with Apache Spark, aimed at simplifying and accelerating Hadoop cluster deployments. By providing fast time to value, the appliance allows organizations to focus on driving innovation and results, rather than on using resources to deploy their Hadoop cluster.

The appliance’s ease of deployment and scalability addresses the needs of organizations that want to use high-performance interactive data analysis for analyzing utility smart meter data, social data for marketing applications, trading data for hedge funds, or server and network log data. Other uses include detecting network intrusion and enabling interactive fraud detection and prevention.

Built on Dell hardware and an Intel performance- and security-optimized chipset, the appliance includes Cloudera Enterprise, which is designed to store any amount or type of data in its original form for as long as desired. The Dell In-Memory Appliance for Cloudera Enterprise comes bundled with Apache Spark and Cloudera Enterprise components such as Cloudera Impala and Cloudera Search.

Cloudera Impala is an open-source massively parallel processing (MPP) query engine that runs natively in Hadoop. The Apache-licensed project enables users to issue low-latency SQL queries to data stored in Apache HDFS™ (Hadoop Distributed File System) and the Apache HBase™ columnar data store without requiring data movement or transformation.
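Because Impala speaks SQL over a standard interface, querying it from application code is straightforward. The snippet below is a minimal sketch using the open-source impyla client; the host and table names are placeholders, and port 21050 is the Impala daemon's default.

```python
# Hypothetical Impala query via the open-source impyla client;
# host and table names are placeholders.
from impala.dbapi import connect

conn = connect(host="impalad.example.com", port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT region, COUNT(*) AS orders
    FROM sales.orders          -- placeholder table stored in HDFS
    GROUP BY region
    ORDER BY orders DESC
    LIMIT 10
""")
for region, orders in cur.fetchall():
    print(region, orders)
```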

Cloudera Search brings full-text, interactive search and scalable, flexible indexing to CDH and enterprise data hubs. Powered by Hadoop and the Apache Solr™ open-source enterprise search platform, Cloudera Search is designed to deliver scale and reliability for integrated, multi-workload search.
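Since Cloudera Search is built on Solr, it can be queried with standard Solr clients. A minimal sketch using the pysolr library follows; the collection URL, field names and query are assumptions for illustration.

```python
# Hypothetical full-text query against a Cloudera Search (Solr) collection;
# the URL, collection name and fields are placeholders.
import pysolr

solr = pysolr.Solr("http://search.example.com:8983/solr/logs", timeout=10)
results = solr.search("level:ERROR AND message:timeout", rows=5)

print("matches:", results.hits)
for doc in results:
    print(doc.get("timestamp"), doc.get("message"))
```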

Changing the game

Since its beginnings in 2005, Apache Hadoop has played a significant role in advancing large-scale data processing. Likewise, Dell has been working with organizations to customize big data platforms since 2009, delivering some of the first systems optimized to run demanding Hadoop workloads.

Just as Hadoop has evolved into a major data platform, Dell sees Apache Spark as a game-changer for interactive processing, driving Hadoop as the data platform of choice. With connected devices and embedded sensors generating a huge influx of data, streaming data must be analyzed in a fast, efficient manner. Spark offers the flexibility and tools to meet these needs, from running machine-learning algorithms to graphing and visualizing the interrelationships among data elements — all on one platform.

Working together with other industry innovators, Dell is enabling organizations of all sizes to harness the power of Hadoop to accelerate actionable business insights.

Joey Jablonski contributed to this article.

Doing the (previously) impossible

Apache Hadoop and big data analytics capabilities enable organizations to do what they couldn’t do before, whether that means making memorable customer experiences or optimizing operations.

Personalized content. A digital media company turned to Hadoop when burgeoning data volumes hindered its mission to simplify marketers’ access to data that would let them tailor content to individual customers. The company’s move to Cloudera Enterprise, powered by Dell PowerEdge servers, enabled complex, large-scale data processing that delivered greater than 90 percent accuracy for its content personalization services. Moreover, the 24x7 reliability of the Hadoop platform lets the company provide the data its customers need, when they need it.

Product quality management. To help global manufacturers efficiently manage product quality, Omneo implemented a software solution based on the Cloudera Distribution including Apache Hadoop (CDH) running on a cluster of Dell PowerEdge servers. Using the solution, Omneo customers can quickly search, analyze and mine all their data in a single place, so they can identify and resolve emerging supply chain issues. “We are able to help customers search billions of records in seconds with Dell infrastructure and support, Cloudera’s Hadoop solution, and our knowledge of supply chain and quality issues,” says Karim Lokas, senior vice president of marketing and product strategy for Omneo, a division of the global enterprise manufacturing software firm Camstar Systems. “With the visibility provided by this solution, manufacturers can put out more consistent, better products and have less suspect product go out the door.”

Information security services. Dell SecureWorks is on deck 24 hours a day, 365 days a year, to help protect customer IT assets against cyberthreats. To meet its enormous data processing challenges, Dell SecureWorks deployed the Dell | Cloudera Apache Hadoop Solution, powered by Intel Xeon processors, to process billions of events every day. “We can collect and more effectively analyze data with the Dell | Cloudera Apache Hadoop Solution,” says Robert Scudiere, executive director of engineering for SecureWorks. “That means we’re able to increase our research capabilities, which helps with our intelligence services and enables better protection for our clients.” By moving to the Dell | Cloudera Apache Hadoop Solution, Dell SecureWorks can put more data into its clients’ hands so they can respond faster to security threats than before.

Taking Hadoop for a test-drive

How can IT decision makers determine the best way to capitalize on an investment in Apache Hadoop and big data initiatives? Dell has teamed up with Intel to offer the Dell | Intel Cloud Acceleration Program at Dell Solution Centers, giving decision makers a firsthand opportunity to see and test Dell big data solutions.

Experts at Dell Solution Centers located worldwide help bolster the technical skills of anyone new (and not so new) to Hadoop. Participants gain hands-on experience in a variety of areas, from optimizing performance for an application deployed on Dell servers to exploring big data solutions using Hadoop. At a Dell Solution Center, participants can attend a technical briefing with a Dell expert, take part in an architectural design workshop or build a proof of concept to comprehensively validate a big data solution and streamline deployment. Using an organization’s specific configurations and test data, participants can discover how a big data solution from Dell meets their business needs.

For more information, visit Dell Solution Centers

Revving up cluster computing

The expansion of the Internet of Things (IoT) has led to a proliferation of connected devices and machines with embedded sensors that generate tremendous amounts of data. To derive meaningful insights quickly from this data, organizations need interactive processing and analytics, as well as simplified ecosystems and solution stacks.

Apache Spark is poised to become the underpinning technology driving the analysis of IoT data. Spark utilizes in-memory computing to deliver high-performance data processing. It enables applications in Hadoop clusters to run up to 100 times faster than Hadoop MapReduce in memory or 10 times faster on disk. Integrated with Hadoop, Spark runs on the Hadoop YARN (Yet Another Resource Negotiator) cluster manager and is designed to read any existing Hadoop data.

Within its computing framework, Spark is tooled with analytics capabilities that support interactive query, iterative processing, streaming data and complex analytics such as machine learning and graph analytics. Because Spark combines these capabilities in a single workflow out of the box, organizations can use one tool instead of traditional specialized systems for each type of analysis, streamlining their data analytics environments.
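To ground the streaming scenario, the hypothetical sketch below uses Spark's DStream API to compute simple per-batch statistics over sensor readings arriving on a socket; the source host, port and record format are assumptions for illustration.

```python
# Hypothetical streaming sketch using Spark's DStream API;
# the socket source and "id,value" record layout are placeholders.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="sensor-stream")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

lines = ssc.socketTextStream("sensor-gateway.example.com", 9999)
readings = lines.map(lambda l: float(l.split(",")[1]))  # assume "id,value" records

# Report a simple count and maximum for each batch.
readings.count().pprint()
readings.reduce(max).pprint()

ssc.start()
ssc.awaitTermination()
```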

Learn More:

Hadoop Solutions from Dell

Dell Big Data

Hadoop@Dell.com 

A Single Approach to Anypoint Systems Management — Or Would You Rather Have More Complexity?


What’s systems management like in the age of mobility and BYOD? 

Systems management isn’t just about PCs and servers, the way it used to be, but about smartphones, tablets, laptops and other types of smart, connected devices.

Most IT administrators will tell you they feel as though they have too many devices to manage and too few good tools with which to manage them.  That’s because the tools you’ve been using for systems management don’t address these new types of devices.  So you turn to point solutions — one for smartphones, another for non-computing devices, another for virtual machines — until the effort of jockeying multiple interfaces and reconciling multiple report formats makes it seem like more work than it’s worth.  The result is an increasing amount of complexity and resource drain that goes along with so many devices and so few good tools.

What if your existing systems management solution could provide a single, at-a-glance view of traditional devices like PCs and servers and extend it to mobile, BYOD and other network-connected devices? What if you could employ a true anypoint management solution?

New perspective, new e-book

To help you go from more complexity to less complexity in your systems management, we've put together a new e-book titled “A Single Approach to Anypoint Systems Management.” Part 1 of the e-book describes the issues faced by IT administrators like you in managing more complex environments that now include smartphones, tablets, laptops and other types of smart, connected devices.

Part 2 of the e-book sets the stage for Dell’s approach to managing PCs, servers, smartphones, tablets and network-connected devices all from the same tool and is subtitled “Systems Management without Missing a Beat: A Model for Integrating Anypoint Device Control.” 

This series of posts will describe Dell’s viewpoint on “Anypoint Systems Management” in greater detail. Meanwhile, you can read Part 2 of the e-book right now and begin gauging the fit within your own organization.

Efficiently manage 30,000 desktops and laptops — while saving $100,000?


How many desktops and laptops does your institution have to manage? How much time does it take? Making sure you have an accurate inventory is only the beginning. You also need to keep them updated with the latest operating system, browser and application fixes to ensure security. And you need to be able to troubleshoot problems that inevitably arise.

Now imagine having 30,000 machines to manage — and a public school system budget to do it with.

That was exactly the challenge facing Seminole County Public Schools. But by replacing its assortment of ineffective asset management, issue tracking and machine imaging tools with two Dell KACE appliances, the district enabled easy management of its 30,000 desktops and laptops – and saved $100,000 over three years.

With the Dell KACE K1000 Systems Management Appliance, Seminole County Public Schools always has an accurate inventory of its assets and can keep all the machines properly updated to protect student and staff computers, as well as the network. And the K1000’s integrated service desk enables users to submit help desk tickets online instead of contacting IS staff individually, so the tickets can be prioritized and resolved efficiently.

The K2000 Systems Deployment Appliance complements the K1000 by simplifying disk imaging and deployment, enabling IS staff to create and maintain a standard imaging platform across the district.

But wouldn’t a comprehensive solution like this swallow a public school district’s budget entirely? Not at all. By replacing three separate systems and eliminating the need to purchase a significantly more expensive solution, the K1000 and K2000 actually saved Seminole County Public Schools at least $100,000 over three years. Plus, the appliances have a smaller footprint, so the district needs less physical space and has reduced its energy costs.

To learn more, register for a live webcast where IT and network experts from Seminole County Public Schools will explain how the district leverages the KACE K1000 and K2000 to enable the forward-thinking initiatives that help make it one of the highest-performing school districts in the state while also ensuring security and reducing costs.


A Single Approach to Anypoint Systems Management — Dell KACE + Dell Enterprise Mobility Management


How are you dealing with the proliferation of devices in your networks due to mobility and BYOD?

We looked at the complexity our customers faced with the explosion in mobile devices, multiple operating systems and intelligent network-connected endpoints. We also considered the spectrum of ownership that stretches from entirely corporate-owned assets (remember the good old days?) to personally purchased and owned devices (the BYO that’s swamping your organization these days).

The market requirement became obvious: IT administrators want a single product that brings all of these endpoints and ownership models under one overall systems management umbrella. Existing products seemed to bolt mobile device management (MDM) onto an existing systems management tool, but the integration wasn’t very good. They didn’t go far enough toward addressing the biggest concern in this area:

How can you extend comprehensive systems management to mobile and BYO devices without negatively impacting end-user productivity or privacy concerns?

Dell KACE appliances and Dell Enterprise Mobility Management

Our answer was to integrate the Dell KACE K1000 Systems Management Appliance with Dell Enterprise Mobility Management (EMM), offering not only a single view of traditional, mobile and BYO devices, but also full systems management of those devices in an integrated solution set.

The K1000 gives you inventory and discovery, software distribution, asset management, security vulnerability detection and remediation, service desk and reporting for PCs, servers, Chromebooks and network-connected, non-computer devices — everything you need for systems management. Dell EMM can manage smartphones and tablets, regardless of whether the organization or the employee owns them, and integrates mobile device information into the K1000 to provide unified asset management for all connected systems and devices.

Anypoint systems management from Dell gives you insight into and control over all network-connected devices as they continue to grow in number and variety.

Don’t you want that level of integration in systems management for all the devices in your organization? I’ll have more details in my next post. Meanwhile, read Part 2 of our new e-book, “A Single Approach to Anypoint Systems Management,” right now.

Why do you need a systems management solution and which one is the best for you? – New White Paper from EMA has the answers


IT environments are becoming increasingly diverse and complex, and consequently harder to manage. Mobility, along with more and more smart devices (i.e., the Internet of Things), has led to a significant increase in the number and types of devices that are connected to corporate networks. You must now track, manage and secure all of these devices. If your organization previously felt that it could get along without a comprehensive systems management strategy, you are likely now feeling the pressure to find a comprehensive “anypoint” systems management solution, one that is both easy to use and addresses all of your concerns.

Whether you’re in the market for your first systems management lifecycle solution or you’re looking to upgrade your existing one, you’ll find a valuable resource in “Best Practices in Lifecycle Management,” the new comparative paper from Enterprise Management Associates, Inc. (EMA). I’ll cover highlights of the paper in this series of blog posts, starting with this post on the most important criteria for evaluating systems management products.

Covering the main disciplines in systems lifecycle management

Systems lifecycle management spans all the processes you need to get the most productivity and efficiency out of your IT spend. These processes have evolved and expanded over time, and you need to address all of them if you want to sleep soundly at night without worrying about compliance violations, security vulnerabilities, managing the ever-increasing number of mobile and non-computer devices on your network, or lost productivity due to system performance issues:

  • Asset Management – Discovering hardware and software assets and managing the asset lifecycle
  • Software Distribution and Provisioning – Installing software from a central location
  • Security and Patch Management – Automating patching and security management
  • Configuration Compliance and Remediation – Centralizing systems configuration and enforcement of policies
  • Process Automation – Automating and integrating IT management processes
  • Service Desk – Providing a service desk, integrated with asset management and reporting
  • Reporting – Comprehensive and customizable reporting and alerts

In light of those and other criteria, EMA conducted extensive research and evaluated four of the most popular lifecycle management solutions:

  • Dell KACE K1000 Systems Management and K2000 Systems Deployment Appliances
  • LANDESK Management Suite 9.6 SP1
  • Microsoft System Center 2012 R2 Configuration Manager (SCCM)
  • Symantec Altiris Client Management Suite (CMS) 7.5 SP1

You’ll take away a clear picture of how to shop for lifecycle management products. You’ll see how to start narrowing down the checklist of features and capabilities that will make the biggest difference in managing the desktops, laptops, servers and non-computer devices in your organization.

Download the Tech Brief

Get your free copy of “Best Practices in Lifecycle Management” for EMA’s detailed comparison of over 45 features across seven main areas of lifecycle management.

A Single Approach to Anypoint Systems Management — Balancing Privacy and Productivity


How are you keeping your end users happy and productive in the age of mobility and BYOD?

In my last post, I described the Dell KACE-Dell Enterprise Mobility Management combination for bringing all of your organization’s traditional, mobile and BYO devices under one umbrella in a single systems management product.

“That’s all right for the devices,” you say, “but what about the people using them? How does Dell KACE-Dell EMM deal with issues of privacy and productivity?”

Fair enough. After all, what good is it to address IT’s concerns for tracking and managing all connected devices if it only leads to resistance among users worried about privacy on their personally owned devices?

Balancing privacy and productivity

Don’t forget that the main reason for putting all your blood, sweat and tears into a BYOD initiative is that users can be more productive when they use their own devices. The freedom to use the corporate network as productively as possible lets users take advantage of the apps, devices and means of communication they know best. But in the same way that users expect the organization to respect personal privacy, they also expect it to respect the privacy of personal data on their own devices.

The balance, then, is to separate business and personal use clearly and securely right on the device. The Dell KACE-Dell EMM combination for anypoint systems management strikes that balance with the Dell Mobile Workspace app that users install on their own smartphone or tablet and the Dell Desktop Workspace application for their BYO laptop. Other than installing the secure workspace on their personal device, where they can securely access corporate resources and do their work, users don’t see any difference in their devices. IT has access only to the workspaces on their personal devices, and users can be assured that personal apps and data are safely partitioned and completely inaccessible by IT.

Workspaces also assure IT of the same level of security desired for corporate-owned assets, with features like encryption, secure remote access, firewall and data loss prevention for the corporate information residing within the workspace on a user-owned device.

In my next post I’ll have more details on how anypoint systems management looks from the IT perspective, so stay tuned. Meanwhile, have a look through Part 2 of our new e-book, “A Single Approach to Anypoint Systems Management,” right now.

Effective Systems Management in a Multi-Platform World — New Tech Brief


If variety is the spice of life, then why does it cause such headaches for IT?

Most IT administrators walk into heterogeneous computing environments every day knowing they’ll have to manage computers and servers running several flavors of Windows, Mac OS X, UNIX, Linux and, more recently, Chrome OS. Efficiently managing such multi-platform environments is not easy. Tasks like keeping track of all hardware and software, making sure you are in compliance with applicable regulations and licensing agreements, ensuring security, and providing exceptional support become increasingly difficult as your environment becomes more complex.

Systems management with multiple operating systems

In my last post I introduced “Best Practices in Lifecycle Management,” a new position paper from Enterprise Management Associates, Inc. (EMA), and outlined the many IT disciplines that lifecycle management covers, if you’re doing it correctly.

Strong support for heterogeneous computing environments is a must-have in a systems management product, and the paper includes a section specifically identifying the OSes supported by each of the four most popular systems lifecycle management solutions on the market:

  • Dell KACE K1000 Systems Management and K2000 Systems Deployment Appliances
  • LANDESK Management Suite 9.6 SP1
  • Microsoft System Center 2012 R2 Configuration Manager (SCCM)
  • Symantec Altiris Client Management Suite (CMS) 7.5 SP1

If you spend your workday managing a multi-platform environment, then you’ll find EMA’s checklist a valuable resource in preparing for your next investment in a systems lifecycle management product.

Get a free copy of the tech paper and have a look at the section “Heterogeneous Support” on page 5 of the PDF.

A Single Approach to Anypoint Systems Management — Unifying Your View of the Environment


You really can achieve a single view of your entire connected environment.

Throughout this series of posts I’ve emphasized that full control with a single, overall view is the promised land for systems management in the age of mobility, BYOD and the Internet of Things (IoT). It’s the only way you can keep the corporate data on your users’ personally owned smartphones, tablets and PCs as secure as the data on your corporate-owned computers and servers, while also managing a host of new, network-connected, non-computer devices.

The combination of the KACE K1000 Systems Management Appliance and Dell Enterprise Mobility Management (EMM) can help you reach the promised land of anypoint systems management without the patchwork of point solutions you’ve likely accumulated. As shown in the diagram, the Dell KACE K1000, through its integration with Dell EMM, provides you with a single, integrated view of your entire environment – corporate-owned devices as well as secure workspaces:

  1. Systems management of traditional devices like desktop PCs, laptops, Macs and servers, as well as network-connected non-computer devices with the K1000
  2. Systems management of all mobile devices, corporate-owned as well as BYOD, by EMM with integration of all asset data into the K1000
  3. Systems management of the secure workspaces within BYO PCs

Did we leave any devices out? We don’t think so. The Dell KACE-Dell EMM solution gives IT administrators the control and insight they need, while giving users the privacy they want and the freedom to be productive using their own devices.

Next steps

Once you’ve read Part 2 of our e-book, “A Single Approach to Anypoint Systems Management,” you’ll want to see a live web demo of the Dell KACE K1000 Management Appliance and Enterprise Mobility Management (EMM) solution. You can also take your own tour and interact with a live KACE K Series Appliance in our Demo Sandbox.

Analyzing the Costs of Systems Lifecycle Management — New Tech Paper


Are you tired of the manual effort to manage your IT environment or the redundancy of using multiple point solutions? Are the risks of a security breach or the costs of lost productivity becoming too high to ignore? Is your existing systems management solution too complicated to use or too expensive to upgrade? Whether you are looking to invest in a systems lifecycle management solution for the first time or looking to replace your existing solution, be sure you consider all financial components.

To continue my series of posts on “Best Practices in Lifecycle Management,” a new comparison paper from Enterprise Management Associates, Inc. (EMA), I want to highlight the paper’s Financial Comparison section. It shows not only the dollars you’ll spend on the solution itself, but also the other costs involved in executing a successful systems lifecycle management implementation.

You’ll need to take several types of costs into account:

  • Pricing model — This includes the list price, which varies from one-size-fits-all to graduated pricing tailored to the size of your organization. It also includes the annual maintenance costs to keep the product up to date.
  • Infrastructure costs — How much additional hardware and software will you need to support your systems lifecycle management solution? This too varies widely from solution to solution, with the number of server-plus-OS configurations, client access licenses, database licenses and maintenance contracts you have to put in place.
  • Operational and training costs — Systems lifecycle management products that require more infrastructure will almost certainly require more people to administer that infrastructure. How many more people will you need to add to your IT staff? What will it cost to train them? Is there a way to avoid adding staff?
  • Non-computer device support — With more smart, non-computer devices connecting to your network, lifecycle management now extends beyond PCs and servers to devices like printers, projectors, scanners and even uninterruptible power supplies. If you choose a product that supports those devices, your cost calculations will have to include them as well.

For each of these costs, the paper builds out a total implementation cost based on size of organization, number of servers, length of maintenance contract, number of locations and other variables. It’s easy to follow and it’s designed to help you pencil out your own costs.
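
As a rough, hypothetical illustration of that kind of arithmetic, the PowerShell sketch below totals the cost categories listed above. Every figure is a made-up placeholder, not a number from the paper; substitute the numbers you gather for each solution you evaluate.

$licenseCost       = 20000   # list price for your organization's size
$annualMaintenance = 4000    # yearly maintenance to keep the product up to date
$maintenanceYears  = 3       # length of the maintenance contract
$infrastructure    = 6000    # extra servers, OS licenses, CALs, database licenses
$staffAndTraining  = 10000   # added administrators and their training

$total = $licenseCost + ($annualMaintenance * $maintenanceYears) + $infrastructure + $staffAndTraining
"Estimated total implementation cost: {0:N0}" -f $total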

Get your free copy of “Best Practices in Lifecycle Management” and have a look at pages 14 through 18 in the PDF for EMA’s detailed cost comparison of four products, including Dell KACE Appliances.

Where is the Microsoft OEM Server OS Certificate of Authenticity (COA) label located on Dell PowerEdge servers?


This blog was originally written by Sushma Venkatesh from the Dell Windows Engineering Team.

What is the Certificate of Authenticity for Windows Server Operating System?

A Microsoft Certificate of Authenticity (COA) is a label that helps you identify genuine Microsoft Windows software. All OEM Windows Server operating system orders must include a COA label. Without it, you do not have a legal license to run your Windows software.

The COA label also contains the operating system's product activation code, or key, which may be needed if the operating system has to be reinstalled, or for virtual machine instances. Some products (Windows Server 2008 R2 and older) contain two keys, one each for physical and virtual instances. Windows Server 2012 and newer use the same key for both.

Figure 1: Certificate of Authenticity (COA) Label for Microsoft Windows Server Operating System

A COA label cannot be purchased, sold, or distributed separate from the software it authenticates.  (The Anti-Counterfeiting Act of 2003 (U.S. only), passed on December 24, 2004, makes it a criminal offense to distribute stand-alone COA labels.)

Where are OEM OS COAs located on Dell PowerEdge servers?

For pre-installed Microsoft Windows Server operating systems, the COA label is affixed to the server by Dell in our factory.* A COA label may also be applied to a server by Dell’s channel partners as part of a Reseller Option Kit (ROK).

To locate your Dell OEM OS COA, please look in the following locations: 

  • PowerEdge rack servers: the COA is adhered to the top of the server chassis (see Figure 2)
  • PowerEdge tower servers: the COA is adhered to the back of the server chassis (see Figure 3)
  • PowerEdge blade/modular servers: the COA is adhered to the top of the server blade (see Figure 4)

If you have concerns that your server, ordered with an OEM Windows Server OS, is missing the COA label, please be prepared to submit photographs of the system when contacting Dell for assistance.

* For certain Dell server products that require multiple COAs, the primary COA is affixed to the server and the additional COA labels are shipped drop-in-box along with the other license terms.

Figure 2. Top of Dell PowerEdge Rack Server (R920)

Figure 3. Back of Dell PowerEdge Tower Server (T630)

Figure 4. Top of Dell PowerEdge Blade Server (M620)


Asset Management Without Agents — New Tech Brief


Everybody in IT wants systems management. Traditionally, that has been accomplished by installing an agent on a managed system. However, not all systems or devices can support an agent. Some systems, such as electronic kiosks, ATMs or point-of-service devices are locked and simply do not allow installation of any additional software, including an agent from a systems management solution. And many smart, connected non-computer devices, such as printers or scanners, do not allow for installation of an agent.

Not everybody wants to install an agent on all of their systems, either. Justifiable or not, there is a perception in some IT organizations that installing a systems management agent can negatively impact performance, which leads to reluctance to install agents on some systems, primarily mission-critical production servers.

How does your IT team come down on this issue? Solutions that provide agentless capabilities can expand the scope of your systems management to more devices without worrying about performance concerns.

Asset Management Without Agents

Once they have discovered your assets, most systems management products can perform inventory and tracking without an agent (“agentlessly”); discovering the assets without an agent, however, is another story.

Agentless systems management makes life easier for organizations that are reluctant to install an agent yet still want complete asset lifecycle management: hardware and software inventory, reporting, service desk and so on. And, of course, you need agentless systems management if you want to discover and monitor non-computer assets like printers, network devices and storage devices.

Agentless management connects over SSH, Telnet, SNMP and other protocols to collect device information and report inventory. It’s useful if you’re running unusual OS versions/distributions, or if your organization simply prefers to manage its systems without installing an agent.
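
As a minimal sketch of the idea, assuming the target is a Windows machine with WinRM enabled (the computer name below is a placeholder), PowerShell can pull basic inventory over WS-Man with nothing installed on the target:

# Query a remote system over WS-Man without installing any agent on it
$session = New-CimSession -ComputerName 'SERVER01'

$system = Get-CimInstance -CimSession $session -ClassName Win32_ComputerSystem
$bios   = Get-CimInstance -CimSession $session -ClassName Win32_BIOS

# Report the collected asset data
[PSCustomObject]@{
    Name         = $system.Name
    Manufacturer = $system.Manufacturer
    Model        = $system.Model
    SerialNumber = $bios.SerialNumber
}

Remove-CimSession $session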

A new comparison paper from Enterprise Management Associates, Inc. (EMA), called “Best Practices in Lifecycle Management,” provides guidance on the agentless asset management capabilities of leading systems management solutions and is a valuable guide for any IT team planning to implement a new systems management product or upgrade an existing one. Download a free copy of the paper and have a look at page 5 of the PDF for a longer discussion about agent-based vs. agentless discovery and inventory management.

New Printer QRL Support Codes

Hi Guys! We've got a great new support library tool that may just save you a bunch of time troubleshooting your Dell printers. We know it can be difficult to troubleshoot printers, having to run back and forth between your system and the...(read more)

Getting Cluster Logs remotely using PowerShell


Today, PowerShell has become increasingly important in managing servers, and you can use both in-band and out-of-band mechanisms that support PowerShell for automating administration, monitoring and troubleshooting.

PowerShell offers a set of robust modules for gathering troubleshooting information. When working with clusters, especially while debugging, it becomes very important to retrieve every log you can. PowerShell lets you manage and troubleshoot the entire cluster, and if you monitor a cluster remotely you can run most cmdlets. However, if you want to, say, retrieve cluster logs, PowerShell will not let you perform that operation directly, because the command needs to be executed on all the nodes in the cluster.

So in my example I tried the following to start with:

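# Open a remote session to one cluster node; Kerberos authentication covers this single hop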
$sessionfirst = New-PSSession -ComputerName ClusNode1 -Authentication Kerberos

Invoke-Command -Session $sessionfirst {Get-ClusterLog}  

 

This command returns an error:

Failed to generate the cluster log on node ClusNode2. Access is Denied.

This happens because, although the systems are in the same domain and Kerberos authentication is set up for the session, this particular command (for retrieving the logs) requires authenticating with all the servers in the cluster. The session we set up reaches ClusNode1 only; it does not establish remote sessions to the other nodes. This is a security mechanism to prevent inadvertent access.

To retrieve the cluster logs remotely using PowerShell automation, you would need to use CredSSP. “CredSSP is a Security Service Provider that allows you to authenticate in a remote machine creating a logon session that contains the user credentials.”

 

So on the client system, you need to enable the CredSSP client role and provide the name of the delegate computer. You also need to enable the WSMan CredSSP server role on the remote server, giving it permission to use the credentials delegated to it.

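A minimal sketch of those two steps, reusing this example's node names (run the first command on the client machine and the second on ClusNode1):

# On the client: enable the CredSSP client role and name the delegate computer
Enable-WSManCredSSP -Role Client -DelegateComputer ClusNode1.DomainName.local -Force

# On the server: enable the CredSSP server role so it can use the delegated credentials
Enable-WSManCredSSP -Role Server -Force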
 

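# CredSSP authentication lets the delegated credentials flow from ClusNode1 to the other cluster nodes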
$session = New-PSSession -ComputerName $serverFQDN -Authentication CredSSP -Credential DomainName\Administrator

Invoke-Command -Session $session {Get-ClusterLog}

OUTPUT:

Mode      LastWriteTime          Length    Name          PSComputerName
----      -------------          ------    ----          --------------
-a----    4/2/2015   6:26 PM     20794868  Cluster.log   ClusNode1.DomainName.local
-a----    4/2/2015   6:26 PM     25370434  Cluster.log   ClusNode1.DomainName.local

The attached PowerShell script depicts the complete scenario of retrieving cluster logs from a two-node cluster.

Additional Resources

Managing Dell PowerEdge VRTX using Windows PowerShell

 

SUSE Enterprise Linux 12 and Dell PowerEdge Servers


This blog was originally written by Smitha Honnayakanaha and Gobind Vijayakumar from the Dell OS Engineering Team.

We are thrilled to announce the availability of SUSE Linux Enterprise Server 12 (SLES 12) on Dell PowerEdge servers. SUSE announced this next major distribution release in November last year.

With extensive platform enablement and comprehensive testing of all relevant drivers, SLES 12 delivers a superior customer experience on Dell PowerEdge servers. SLES 12 is based on the upstream Linux 3.12 kernel and openSUSE 13, and it includes driver updates, bug fixes and support for new hardware; refer to the SLES 12 Release Notes for more information.

SLES 12 has been extensively validated on all supported Dell PowerEdge servers.

Here are some of the hardware enablement features that make SLES 12 on Dell PowerEdge servers worth a look:

  • Intel Haswell EP Processor support
  • Intel Avoton Processor support
  • Support for Dell PERC 9 storage controllers
  • Support for PCIe SSD and NVMe

SUSE Linux Enterprise Server 12 introduces a number of innovative changes. Here are some of the highlights:

Made for Cloud

  • Robustness against administrative errors and improved management capabilities, with full system rollback based on Btrfs as the default file system for the operating system partition and SUSE's Snapper technology
  • An overhauled installer with a new workflow that lets you register your system and receive all available maintenance updates as part of the installation
  • New core technologies such as systemd (replacing the time-honored System V init process) and wicked (introducing a modern, dynamic network configuration infrastructure)
  • Support for open-vm-tools, together with VMware, for better integration into VMware-based hypervisor environments
  • Linux containers integrated into the virtualization management infrastructure (libvirt); Docker is provided as a technology preview

For a full list of features introduced with SLES 12, refer to the SLES 12 Release Notes.

SLES 12 Dell Offerings

  • With SLES 12 RTS from Dell, SLES 12 is supported factory-installed (FI) and drop-in-box (DIB) on 11G, 12G and 13G servers
  • Dell OpenManage (OM) systems management 8.1 is the supported OM version for SLES 12 on PowerEdge servers

Certification Highlights

  • SUSE Linux Enterprise Server 12 has been released to general availability (GA), and with this announcement we are eager to reveal the list of Dell servers that support the new release. These servers have undergone rigorous testing and support most SLES 12 features
  • Below is the list of our 11th-, 12th- and 13th-generation servers that officially support SUSE Linux Enterprise Server 12

Dell PowerEdge Servers

Generation: Server Models

11G: T110 II, R710, R610, R910, M910, M915, R415, R515, R715, R815

12G: PE R720, PE R720xd, PE R620, PE T620, PE M620, PE R820, PE M520, PE M420, PE R420, PE R320, PE R520, PE T320, PE T420, PE M820, PE R920, R220, FM120x4

13G: R730, R730 XD, R630, T630, M630, FC630, R530, R430, T430, FC430

PowerEdge-C (PE-C): C6105, C6145, C6220 II, C8220

References –

  1. SuSE Support – http://www.suse.com/products/server/services-and-support/
  2. Link to YES Certified Dell Servers – http://tinyurl.com/ka4yfc8  

Express Flash NVMe PCIe SSD support in VMware ESXi


This blog post was originally written by Karan Singh Gandhi and Krishnaprasad K from the Dell Hypervisor Engineering team.

What's Express Flash NVMe PCIe SSD?

NVMe (Non-Volatile Memory Express) is a specification for accessing solid-state drives (SSDs) connected through the PCI Express (PCIe) bus. The Dell PowerEdge Express Flash PCIe SSD is based on the NVMe specification and is a high-performance storage device built with enterprise-class NAND flash, designed for solutions requiring low latency, high input/output operations per second (IOPS) and enterprise-class storage reliability and serviceability. The Express Flash PCIe SSD is a PCIe Gen2/Gen3-compliant device (depending on server type) that can be configured as a storage cache or as a primary storage device in demanding enterprise environments, such as enterprise blade and rack servers, video-on-demand servers, web accelerators and virtualization appliances.

Express Flash NVMe PCIe SSD support from Dell with respect to VMware ESXi

  • Express Flash NVMe PCIe SSDs are supported on Dell PowerEdge 12th-generation servers onwards. The servers supporting Express Flash NVMe PCIe SSDs are as follows:
    • PE R620
    • PE R720
    • PE T620
    • PE R820
    • PE M620
    • PE R920
    • PE R630
    • PE R730xd
    • PE T630
    • PE M630
    • PE FC630
    • PE FC830
    • PE M820
    • PE M830

  • The Express Flash NVMe PCIe SSD is supported from VMware ESXi 5.5 Update 2 onwards. However, the driver is included in the Dell customized VMware ESXi image (or the stock VMware ESXi image) only from the ESXi 6.0 release onwards.
  • To enable the Express Flash NVMe PCIe SSD on the VMware ESXi 5.5.x branch, be sure to install the async driver available from the VMware download page.
  • Refer to the Express Flash NVMe PCIe SSD user’s guide, which details the technical specifications and topics such as configuring and managing the NVMe PCIe SSD.

NVMe use cases in VMware ESXi

  • NVMe PCIe SSD as a VMFS datastore
    • An NVMe PCIe SSD can be formatted as a Virtual Machine File System (VMFS) datastore, and virtual machines can be placed on it for better throughput (see the PowerCLI sketch after this list)
  • Passthrough of NVMe device(s) to virtual machines
    • It is possible to pass an NVMe device directly through to the virtual machines running on ESXi. In this case, the guest operating system must have the NVMe driver installed and loaded into its kernel to use the device.
    • This use case is helpful when a specific read/write-intensive application needs to run in the guest operating system.
  • NVMe PCIe SSD as virtual flash
    • Virtual flash (vFlash) has two parts, and the NVMe PCIe SSD can be used in either mode or both together:
      • SSD as Virtual Flash Host Swap Cache: used by the hypervisor for host memory-caching purposes. In this case, the hypervisor shares the SSD device among all the virtual machines
      • SSD as Virtual Flash Read Cache: used by VMs for caching the virtual machine’s I/O requests. With this option, dedicated space is provided for the virtual machine’s VMDK
  • NVMe PCIe SSD hot plug
    • The NVMe PCIe SSD supports hot plug, meaning the device can be removed or added on the fly while the operating system is up and running.
    • Note that hot-removing the NVMe PCIe SSD while the device is in use is NOT supported. The device must be safely removed before it is physically hot-removed
  • RDM creation from the SSD device
    • Refer to VMware KB 1017530 to learn more about creating an RDM for a local drive, which can include Express Flash NVMe PCIe SSDs
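
As an illustrative sketch of the first use case, assuming VMware PowerCLI is installed and the server names below are placeholders for your environment, a local NVMe device can be formatted as a VMFS datastore like this:

# Connect to vCenter (or directly to the ESXi host)
Connect-VIServer -Server 'vcenter.example.local'
$vmhost = Get-VMHost -Name 'esxi01.example.local'

# Find the local NVMe device; canonical names of NVMe devices typically contain "NVMe"
$nvme = Get-ScsiLun -VmHost $vmhost -LunType disk |
        Where-Object { $_.CanonicalName -match 'NVMe' }

# Format the device as a VMFS datastore for virtual machine placement
New-Datastore -VMHost $vmhost -Name 'NVMe-DS01' -Path $nvme.CanonicalName -Vmfs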
