Channel: Dell TechCenter

Dell Open Source Ecosystem Digest: OpenStack, Hadoop & More 9-2012


This week’s OpenStack highlight: enStratus’ “Cloud DevOps: Achieving Agility Throughout the Application Lifecycle” whitepaper. Due to the Christmas holidays, today’s issue is shorter than usual. Nevertheless: Enjoy reading!

OpenStack

enStratus: “Cloud DevOps: Achieving Agility Throughout the Application Lifecycle” whitepaper
www.enstratus.com/devops-wp

Mirantis: “NEW: 2013 Boot Camp Training for OpenStack; plus Do-it-Yourself Assist Services; and more” http://www.mirantis.com/training/

Mirantis: “Questions and Answers: Storage as a Service with OpenStack Cloud” by Greg Elkinbard http://www.mirantis.com/blog/questions-and-answers-about-storage-as-a-service-with-openstack-cloud/

OpenStack Foundation: “OpenStack Community Weekly Newsletter (Dec 14-21)”
http://www.openstack.org/blog/2012/12/openstack-community-weekly-newsletter-dec-14-21/

Rackspace: “Cloud Predictions For 2013” by John Engates
http://www.rackspace.com/blog/cloud-predictions-for-2013/

Dell

“Lean Process’ strength is being Honest and Humble” by Rob Hirschfeld - Personal Blog
http://robhirschfeld.com/2012/12/24/lean-process-is-honest/

“I respectfully disagree – we are totally aligned on your lack of understanding” by Rob Hirschfeld - Personal Blog
http://robhirschfeld.com/2012/12/26/agree-or-disagree/

“Crowbar 2 Planning 20 Dec 2012” by Rob Hirschfeld - Personal Youtube Channel
http://www.youtube.com/watch?v=PAXW35e3ApM&feature=youtube_gdata

Contributors

Please find detailed information on all contributors in our Wiki section.

Contact

Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com


Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #9


Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. Currently, there are only a few MVPs contributing to this compilation, but more will follow soon. @all MVPs If you’d like me to add your blog posts to my weekly compilation, please send me an email (florian_klaffenbach@dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!


Featured Posts of the Week!

Configure Storage Resource Pool in Windows Server 2012 Hyper-V by Lai Yoong Seng

Configure Ethernet Resource Pool in Windows Server 2012 Hyper-V by Lai Yoong Seng

#PSTip Most frequently used -Verb values by David Moravec

MS Christmas present: SMI-S Storage Provider for iSCSI Target Server by Hans Vredevoort


Hyper-V

Configure Storage Resource Pool in Windows Server 2012 Hyper-V by Lai Yoong Seng

Configure Ethernet Resource Pool in Windows Server 2012 Hyper-V by Lai Yoong Seng

Windows Server Core 

MS Christmas present: SMI-S Storage Provider for iSCSI Target Server by Hans Vredevoort

System Center 2012 Data Protection Manager

Hyper-V Protection with Data Protection Manager 2012 SP1 by Hans Vredevoort

PowerShell

#PSTip Most frequently used -Verb values by David Moravec

Friday Fun: Job Announcer by Jeffery Hicks

#PSTip The easy way to get allowed values by Aleksandar Nikolic

Microsoft Azure

Backup SQL Database to Windows Azure Online Backup by Lai Yoong Seng


Other MVPs I follow

James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management
Kristian Nese - MVP for System Center Cloud and Datacenter Management
Ravikanth Chaganti - MVP for PowerShell
Jan Egil Ring - MVP for PowerShell
Jeffery Hicks - MVP for PowerShell
Keith Hill - MVP for PowerShell
David Moravec – MVP for PowerShell
Aleksandar Nikolic - MVP for PowerShell
Marcelo Vighi - MVP for Exchange
Lai Yoong Seng - MVP for Virtual Machine
Rob McShinsky - MVP for Virtual Machine
Hans Vredevoort - MVP for Virtual Machine
Leandro Carvalho - MVP for Virtual Machine
Didier van Hoye - MVP for Virtual Machine
Romeo Mlinar - MVP for Virtual Machine
Aidan Finn - MVP for Virtual Machine
Carsten Rachfahl - MVP for Virtual Machine
Thomas Maurer - MVP for Virtual Machine
Alessandro Cardoso - MVP for Virtual Machine
Ramazan Can - MVP for Cluster
Robert Smit – MVP for Cluster
Marcelo Sinic - MVP Windows Expert-IT Pro
Ulf B. Simon-Weidner - MVP for Windows Server – Directory Services
Meinolf Weber - MVP for Windows Server – Directory Services
Nils Kaczenski - MVP for Windows Server – Directory Services
Kerstin Rachfahl - MVP for Office 365

No MVP but he should be one

Jeff Wouters - PowerShell

Microsoft Masterminds Episode 6: Marcelo Sincic, MVP (Most Valuable Professional) Windows IT Pro Expert from Brazil


Welcome to the new episode of tech talks with outstanding Microsoft community members from all over the world. Most interviews are with Microsoft Most Valuable Professionals (MVPs); if you are not familiar with that program yet, I recommend reading my recent introductory interview. In this episode I talk with Marcelo Sincic, a Brazil-based MVP, Windows IT Pro Expert. Enjoy reading!

Editorial processing done by Rafael Knuth
 

Readable Interview:

Flo: Marcelo, could you introduce yourself to the community?

Marcelo: My name is Marcelo Sincic, and I have worked in IT since 1988. My first job was as a Clipper and dBase II programmer. I have never worked in any other area or specialty.

Flo: What is your focus area as MVP and how did you get started?

Marcelo: My MVP specialty is Windows IT Pro Expert. I started in this important professional program in 2010, and this is now my third year.

Flo: You are part of the Dell Infrastructure Consulting service. Please tell us about your daily work.

Marcelo: I am responsible for recommending and implementing solutions for large and public-sector clients in Brazil. On my team, I work as principal consultant for Microsoft technologies, focusing on virtualization, cloud and the System Center suite. I engage clients before the sale in order to determine the solution, and I implement the solution after purchase.

Flo: In the past you handled very special and challenging projects. What was your most challenging one and why?

Marcelo: In 2011 I participated in a virtualization project for a large enterprise in Brazil with a global presence and 62 locations. Their business is mineral extraction, and the locations are far away from major cities. The project was a challenge because of its size and the distances involved in the implementations, but it was completed successfully.

Flo: You are also implementing Microsoft System Center Products at customer sites. In your opinion, what are the strengths and weaknesses of the new System Center 2012 bundle?

Marcelo: The old System Center versions are great products, but they are not integrated with each other. Now the System Center 2012 suite is integrated through Service Manager (SCSM) and Orchestrator. With these two products I can make SCCM talk to SCOM, VMM and the others. For example, I can create a Runbook in Orchestrator that sets maintenance mode for a virtual machine in SCOM, opens an incident in SCSM, stops the machine in VMM, installs updates from SCCM, starts the machine in VMM, sets the normal state in SCOM and closes the incident in SCSM. The Runbook is created with a visual interface, permits the use of OS variables, and integrates with SCSM to receive data for activities. This is a great and easy example of what can be built.

Flo: Do you have a favorite System Center Product e.g. Orchestrator or Operations Manager?

Marcelo: Yes, my favorite is Orchestrator, which is a new product in the suite. But I like Configuration Manager, Operations Manager and Virtual Machine Manager as well.

Flo: Can you explain why?

Marcelo: Orchestrator transforms the individual System Center products into a truly integrated suite. Configuration Manager, Operations Manager and Virtual Machine Manager are essential for administering environments.

Flo: What is your favorite feature in this product?

Marcelo: Orchestrator Integration Packs are simple to implement yet offer advanced functions. Runbook Designer is a great tool for designing runbooks, with a simple visual approach and easy construction.

Flo: How will SP1 for System Center 2012 affect your daily work?

Marcelo: I expect SP1 to deliver System Center 2012 on the Windows Server 2012 operating system and SQL Server 2012. In addition, support for VMM agents on Windows Server 2012 is essential too.


Contact information:

Twitter: @marcelosincic
LinkedIn: http://www.linkedin.com/in/marcelosincic
Blog: http://www.marcelosincic.com.br/blog/

AMD Opteron 6300 and 4300 series server processors on enterprise Linux operating systems


AMD recently announced the release of AMD Opteron™ 6300 and AMD Opteron™ 4300 series server processors.

Several Dell PowerEdge servers have been refreshed with the new AMD Opteron™ processors: the AMD Opteron™ 6300 series on PowerEdge R815, R715 and M915 servers, and the AMD Opteron™ 4300 series on PowerEdge R415 and R515 servers.

Dell Linux engineering, in partnership with Red Hat and SUSE, has validated and certified supported Dell PowerEdge servers with AMD Opteron™ 6300 and 4300 series processors.

Listed below are the certified enterprise Linux operating systems on these Dell PowerEdge servers.

| Dell PowerEdge Server | Processor refresh | Minimum BIOS Version | RHEL 5.x | RHEL 6.x | SLES 11 SPx |
|---|---|---|---|---|---|
| PowerEdge R815 | AMD Opteron™ 6300 series | V3.0.4 | RHEL 5.8 x86 & x86_64 | RHEL 6.3 x86_64 | SLES 11 SP2 x86_64 |
| PowerEdge R715 | AMD Opteron™ 6300 series | V3.0.4 | RHEL 5.8 x86 & x86_64 | RHEL 6.3 x86_64 | SLES 11 SP2 x86_64 |
| PowerEdge M915 | AMD Opteron™ 6300 series | V3.0.4 | RHEL 5.8 x86 & x86_64 | RHEL 6.3 x86_64 | SLES 11 SP2 x86_64 |
| PowerEdge R415 | AMD Opteron™ 4300 series | V2.0.2 | RHEL 5.8 x86 & x86_64 | RHEL 6.3 x86_64 | SLES 11 SP2 x86_64 |
| PowerEdge R515 | AMD Opteron™ 4300 series | V2.0.2 | RHEL 5.8 x86 & x86_64 | RHEL 6.3 x86_64 | SLES 11 SP2 x86_64 |

Click here to view the AMD processors and supported Red Hat Enterprise Linux (RHEL) versions.

Note: The core enumeration order on AMD Opteron™ 4100, 4200, 4300, 6100, 6200, and 6300 series processors has changed. The new processor core enumeration order is designed to facilitate better power management and can be enabled by upgrading to the latest BIOS version for your system from support.dell.com. Refer to the user guide for information on the BIOS version.

Announcing the release of Red Hat Enterprise Virtualization (RHEV) 3.1 on Dell PowerEdge servers


Red Hat announced the release of Red Hat Enterprise Virtualization (RHEV) 3.1, the first major update to its enterprise virtualization platform RHEV 3.0.

Some of the features in RHEV 3.1 include:

  • Increased hypervisor support for the latest Intel and AMD x86 processors
  • Support for up to 160 logical CPUs and up to 2 terabytes of memory
  • Support for creating live snapshots and clones of running virtual machines
  • An integrated dashboard that provides a wide range of reports at the datacenter, cluster, host, and virtual-machine level
  • A Python SDK (software development kit) to help write scripts
  • And, as a technology preview, storage migration for virtual machines, in which the disk image of a running VM can be moved to a different storage device without stopping the virtual machine

Please refer to the Manager release notes for the complete list of new features.

RHEV 3.1 is validated and offered on all supported Dell PowerEdge servers, including the latest Dell PowerEdge 12th-generation servers. Please refer to the important information guide for the complete list of issues fixed in RHEV 3.1 on Dell PowerEdge servers.

Dell OpenManage Server Administrator (OMSA) is not supported on RHEV 3 hypervisors. For customers looking for OpenManage Server Administrator support in a RHEV 3 environment, RHEL 6.3 can be used as a host. Please refer to the Red Hat Enterprise Virtualization documentation for more information on using RHEL 6 as a host.

Announcing the release of Red Hat Enterprise Linux 6.3 on Dell PowerEdge servers


Starting in December 2012, Red Hat Enterprise Linux 6.3 will have Dell OpenManage 7.2 support and will be offered as a factory-installed operating system on Dell PowerEdge servers.

Some of the enhancements in Red Hat Enterprise Linux 6.3 include:

  • Support for the latest AMD Opteron 6300 and 4300 series processors
  • Support for PCIe SSD (solid-state drive) drivers
  • Support for QLogic and Emulex 16Gbps FC adapters
  • Device driver updates and bug fixes

Refer to the Red Hat Enterprise Linux 6.3 blog for more information.

Red Hat Enterprise Linux 6.3 has been extensively validated on supported Dell PowerEdge servers. Please refer to the important information guide for the complete list of issues fixed in Red Hat Enterprise Linux 6.3 on Dell PowerEdge servers.

Red Hat announced the global availability of Red Hat Enterprise Linux 6.3 on June 21, 2012. Refer to the datasheet for detailed information on the new features and benefits in Red Hat Enterprise Linux 6.3.

Dell Webinar: Best Practices for VM Overcommitment


Join the Dell vKernel team for a one-hour webinar on best practices for VM overcommitment.

Date and time:

Tuesday, January 8, 2013 11:00 am
Eastern Standard Time (New York, GMT-05:00)

Duration:

1 hour

Description:

The rise of virtualization has created a significant opportunity for efficiency in the data center. The drag of idle resources has not gone away, however, leading administrators to oversubscribe the physical resources that exist on a host, often with the help of vSphere or other tools.

But just how far can this overcommitment be taken and when does it become a problem?

Join virtualization evangelist Jonathan Klick for this 60-minute webcast to learn about:

- The pros and cons of overcommitment
- What is available in vSphere today
- Getting to it – where to start and metrics to watch when oversubscribing processor, memory, and storage resources

Register HERE

OpenStack Board of Directors Talks: Episode 3 with Randy Bias, Co-Founder & CTO at Cloudscaling


Learn firsthand about OpenStack, its challenges and opportunities, market adoption and Cloudscaling’s engagement in the community. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. In order to make these interviews easier to read, I structured them into my (subjectively) most important takeaways. Enjoy reading!

#1 Takeaway: OpenStack fundamental architecture is very sound

Rafael: Let’s start with a very basic question, Randy. What is the value that OpenStack brings to the table?

Randy: The core value of OpenStack is that the fundamental architecture is very sound. If you want to build Infrastructure as a Service (IaaS) in a scale out manner, then you need an asynchronous, loosely coupled, message based type of solution, so that you can create a distributed software system.

We looked at a number of different technologies on the marketplace today, including CloudStack and OpenNebula, and each was missing part of that combination. In the case of CloudStack, where we did one of the three largest deployments, the code base is monolithic: all the code runs in one place. And in the case of OpenNebula you have distributed software, but it lacks an asynchronous message-passing paradigm.

#2 Takeaway: OpenStack key accomplishments are: 1) high profile contributors, 2) regular release cycle, 3) governed by a foundation and 4) broad suite of services

Rafael: What are the key accomplishments of OpenStack so far?

Randy: First, a number of high profile appointments like AT&T, HP and Dell. Second, a regular cadence in the major release cycle, which is a key factor in any large open source project. OpenStack has a six-month cadence with six major releases so far, and it’s becoming better and better with every new version. Third, the formation of the OpenStack Foundation, a single neutral entity that can support OpenStack and its mission, rather than being owned by any one organization. Fourth, OpenStack branched out from the core of Nova and Swift into Nova, Cinder, Horizon, Glance, Swift, Quantum and Keystone in the current OpenStack Folsom release. With that, OpenStack has a full end-to-end suite of services you can pick from, whereas other IaaS open source projects are focusing on a single piece of the problem like compute or networking, for example.

#3 Takeaway: OpenStack needs a broader market adoption and a clear mission

Rafael: … and what still needs to be worked on in OpenStack, Randy?

Randy: We still need significantly more adoption in the marketplace, although I don’t know how to quantify that well. A lot of people are in the process of evaluating OpenStack, and I think that a broader market adoption is just a matter of time.

Also, since the formation of the OpenStack Foundation, there’s an opportunity to re-evaluate what the mission for OpenStack should be. Initially the mission for OpenStack was defined by Rackspace. Now that Rackspace doesn’t own OpenStack anymore, the project’s mission needs to be clarified.

#4 Takeaway: OpenStack Foundation has two options: 1) a hands off and 2) a hands on approach to govern the project

Rafael:  What are viable options for OpenStack?

Randy: One option would be a similar approach to how the Linux Foundation exists in the market place.

The Linux Foundation doesn’t really say what Linux should be. It allows the marketplace and the key contributors to make those decisions themselves. Linux has a reasonable amount of standardization across distributions, yet all of them are designed for completely different purposes. Red Hat Enterprise Linux is seen by many as designed for servers in the way that Ubuntu is for desktops. There are even more specialized Linux distributions for things like embedded systems.

That sort of hands off approach would mean that OpenStack becomes a framework that can be used by anybody in the ecosystem to build all sorts of clouds. In that case the market forces would determine which distributions rise to the top and which don’t … just like is the case with Linux.

Alternatively, the OpenStack Foundation could take a hands on approach and make top down determinations about OpenStack standards … what OpenStack supports, which other cloud software it’s compatible with etc. That approach might get us sooner to interoperability with other clouds such as Amazon Web Services or Google Compute Engine, but it may alienate those members of the community who don’t want to go in the direction which the OpenStack Foundation and Technical Committee may decide makes sense. These folks might then want to build their own system, and that might create a threat of forking OpenStack.

Rafael: Hands off or hands on: Which approach would you prefer for OpenStack?

Randy: I have seen in the past that standards in interoperability seem to bubble up through market adoption. Once customers decide what they want, developers can respond to that rather than trying to predict what the market might want. We all know: Human beings are the worst predictors of the future (laughs).

#5 Takeaway: Cloudscaling focuses on improving OpenStack’s computing capabilities and API compatibility with other public clouds

Rafael: Let’s talk a bit about Open Cloud System – your company’s OpenStack distribution. What makes it unique in the marketplace?

Randy: Before I answer that question let me explain briefly that we use 100% stock OpenStack in our product, Open Cloud System. We don’t modify it, we don’t fork it. Our team has a background in building large scale out, production grade systems that are compatible with other clouds, and that’s our primary focus with OpenStack. We’ve built an integrated system solution rather than a bunch of separate components.

OpenStack from our point of view is a great technology, but it’s a little bit like the Linux kernel. You probably wouldn’t take it and run it in production. Just as with the Linux kernel, you actually have to do a number of things in order to make it robust and production ready … and by production ready we mean that there is a focus on availability, security, performance and maintainability.

OpenStack has lot of configuration parameters, and we simply took advantage of that in order to provide functionality that isn’t available in default OpenStack. Let me give you one example.

Default OpenStack comes with Nova, the compute component. It has two deployment modes. The first deployment mode is a centralized service, that all of your network traffic goes through. It has VLANs behind it, and every tenant of your cloud is on a VLAN. The challenge is that you have a central choke point for all your traffic. Regardless of how fast your switch fabric is … you’re driving traffic for all your VMs through this central Linux box which is obviously not ideal.

The second deployment mode is distributed, where you are also using VLANs … with your Nova network controller running on all your hypervisors. Obviously there are security concerns around that. But setting those aside, you end up making some compromises, because the Nova network controller architecture puts both public and private IPs on your internal switch fabric. A lot of people don’t like to do that; they prefer to keep their public IPs at the edge of the network. In addition, weird bugs crop up because network address translation happens on each of the hypervisor nodes instead of at the edge of your network as you would normally expect. There is a bug in OpenStack right now which is not fixable because of this architecture, and it prevents you from using floating IPs.

This is really clunky. Open Cloud System takes a very different approach. We don’t use any VLANs. We use Layer 3 network routing just like Amazon Web Services does. Our networking model looks exactly like Amazon Web Services Elastic Compute Cloud (EC2).

We have a separate NAT service that runs at the edge of the system. Your public and private IPs are not running together on the switch fabric. We have a distributed DHCP service that runs on all hypervisors; the Nova network controller is off to the side and no network traffic goes through it anymore. Because of that we get full end to end bandwidth between all the VMs and tenants, and we get full throughput to the internet ingress and egress.

Rafael: Can you give some more examples of what your distribution does differently than raw OpenStack?

Randy: Sure. Default OpenStack comes with RabbitMQ as the messaging broker, and we felt that created a single point of failure which didn’t make sense to us, especially in larger deployments. So we came up with an Alternative Approach to OpenStack Nova RPC Messaging.  

We also spent time on pushing code back into OpenStack for API compatibility; we do a lot of work on AWS EC2 API and we recently announced that we are providing a set of APIs that are compatible with Google Compute Engine. Now people have a choice when they use OpenStack compute project Nova, whether they want to use the OpenStack native API, the AWS API or the GCE API.

#6 Takeaway:  Early adopters such as the finance sector are embracing OpenStack, businesses with less sophisticated IT needs might jump in later

Rafael: Randy, we talked very briefly about market adoption earlier. Let’s dwell a bit more on that. Who are the early adopters and do you see signs of OpenStack going mainstream?

Randy: Cloud computing is a new paradigm which is driven by large-scale web companies. When you look at Google, Amazon, Facebook or Twitter data centers, they don’t build systems that look like traditional enterprise data centers.

OpenStack will be broadly adopted by the enterprise over time. If you look at other open source projects such as Hadoop … it gives the average enterprise a competitiveness that Google previously had with MapReduce. Many companies are comfortable with Hadoop already, and I believe the same will happen with OpenStack over time.

In general, we see two sets of customers. You have companies that see IT as a competitive advantage. They will refresh their datacenter infrastructure over the next 5 to 10 years, and they will embrace models such as OpenStack and Hadoop. Financial services companies are already very much involved with both open source projects. Usually they are a good indicator for other industries to follow later.

And then you have companies for which IT is not a significant competitive advantage, such as shipping, and logistics … they are just tracking where their ships or containers are and they don’t have significant need for sophisticated IT solutions. I think that over time those companies might adopt public clouds designed to run their workloads.

#7 Takeaway: Getting the software & solutions provider DNA into Dell is crucial for the future

Rafael: Last question, Randy. How do you view Dell in the OpenStack game?

Randy: We are currently in the middle of a fundamental change in IT, which can be compared to the transition from mainframe to enterprise computing. A lot of the mainframe companies didn’t survive. Those who did like IBM took a very close look at emerging technologies, and they changed their business model.

Dell is making a lot of changes in order to become a software and solution company. I think being a solution provider is the only way for a company like Dell, which is traditionally a hardware supplier, to survive in the future.

All the right moves have been made at Dell, and it’s more a question of execution: How do you get the software DNA into Dell? How do you get system thinking DNA into the business? It’s all about finding answers to those questions.

Rafael: Thank you very much for this interview, Randy.

Randy: You’re welcome!

Resources

Cloudscaling

Company website: http://www.cloudscaling.com/

Company blog: http://www.cloudscaling.com/blog/

Twitter: https://twitter.com/randybias

Feedback

Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com


Dell Open Source Ecosystem Digest: OpenStack, Hadoop & More 1-2013


This week’s highlight: “OpenStack Board of Directors Talks: Episode 3 with Randy Bias, Co-Founder & CTO at Cloudscaling”. Enjoy reading!

OpenStack

enStratus: “Successful Application Lifecycle Management in the Cloud” webinar 01-29-2013
https://enstratus-events.webex.com

OpenStack Foundation: “Save the Date – OpenStack Summit Spring 2013”
http://www.openstack.org/blog/2013/01/save-the-date-openstack-summit-spring-2013/

OpenStack Foundation: “2nd Swiss OpenStack Meetup”
http://www.openstack.org/blog/2013/01/2nd-swiss-openstack-meetup/

Rackspace: “Book Review: ‘Cloudonomics: The Business Value Of Cloud Computing’” by Ben Kepes
http://www.rackspace.com/blog/book-review-cloudonomics-the-business-value-of-cloud-computing/

Rackspace: “Bridging The Chasm Between IT And The Business” by Ben Kepes
http://www.rackspace.com/blog/bridging-the-chasm-between-it-and-the-business/

Rackspace: “Adopting a Service Provider Mentality and Empowering a Self-Service Organization” roundtable 01-09-2013
http://www.rackspace.com/blog/turning-big-data-into-big-dollars-enterprise-open-cloud-forum-webinar-recap/?1b7f9d90

SUSE: “What 2013 Will Bring From The Cloud” by Brian Proffitt
https://www.suse.com/blogs/what-2013-will-bring-from-the-cloud/

Hadoop

Datameer: “Building a Big Data Recommendation Engine” webinar 01-30-2013
http://info.datameer.com/Building-Big-Data-Recommendation-Engine.html

Datameer: “Our 10 most popular reads of 2012” by Susan Puccinelli
http://www.datameer.com/blog/big-data-analytics-perspectives/our-10-most-popular-reads-of-2012.html

Dell

“OpenStack Board of Directors Talks: Episode 3 with Randy Bias, Co-Founder & CTO at Cloudscaling” by Rafael Knuth
http://en.community.dell.com/techcenter/b/techcenter/archive/2013/01/03/openstack-board-of-directors-talks-episode-3-with-randy-bias-co-founder-amp-cto-at-cloudscaling.aspx

“Dell OpenStack Powered Cloud Solution awarded! 2012 Open Source Cloud Innovation Solution @CNW”
http://www.cnw.com.cn/zhuanti/sk/banjiang2012/cloud_4.html

Contributors

Please find detailed information on all contributors in our Wiki section.

Contact

Twitter: @RafaelKnuth
Email: rafael_knuth@dellteam.com

iDRAC7 now supports configuring server BIOS, iDRAC, RAID and NIC using an XML file and Racadm


This blog post is written by Shine KA and Lakshminarayanan R from Dell iDRAC team.

You can now use the Racadm command-line interface to configure BIOS, iDRAC, RAID and NIC settings on the server using an XML file. The various Racadm interfaces (Local, Remote and Firmware) can export the system configuration to an XML file and import a configuration XML file back to the server. You can use this feature to take a backup of the system configuration, to configure BIOS, iDRAC, RAID and NIC on multiple servers, and to script the configuration of BIOS, iDRAC, RAID and NIC. You need OpenManage 7.2 and iDRAC firmware 1.30.30 onwards to use this feature.

 

Exporting System Config to XML file:

Using Racadm you can export the system configuration to an XML file. This can be done using any Racadm interface (Local, Remote and firmware Racadm via SSH, Telnet or Serial). The XML file can be exported to a local file (using Local and Remote Racadm) or to a remote CIFS/NFS share (using Local, Remote and firmware Racadm). Once the XML export command is initiated, a job is created for the export operation (using the "racadm jobqueue view -i <JOB ID>" command you can get the status of the export job). Once the job is completed, the XML file containing the configuration of BIOS, iDRAC, RAID and NIC is created at the specified location. You need iDRAC/OS administrator privileges to do an XML export. Read-only objects are marked with comment tags in the XML file. iDRAC needs at least an Express license to export the XML file.

The command syntax to export the system configuration:

racadm get -t xml -f <filename.xml> [-l <share IP/path>] [-u <username>] [-p <password>]

E.g.

CIFS Share:

            racadm get -t xml -f <filename.xml> -l <//share IP/path> -u <username> -p <password>

NFS Share:

            racadm get -t xml -f <filename.xml> -l <share IP:/path>

Local Share:

            racadm get -t xml -f <path/filename.xml>
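As a concrete sketch of the export workflow described above, the following session exports to an NFS share and then polls the job queue; the share address, filename and job ID are placeholders, not values from this post:

```shell
# Export the current BIOS/iDRAC/RAID/NIC configuration to an NFS share
# (192.168.0.10:/configs and r720-backup.xml are placeholder values).
racadm get -t xml -f r720-backup.xml -l 192.168.0.10:/configs

# The export runs asynchronously and returns a job ID such as
# JID_123456789012; poll it until the status reports "Completed".
racadm jobqueue view -i JID_123456789012
```

Once the job completes, the XML file on the share can be archived as a backup or edited before re-importing it to this or another server.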

Importing System Config from XML file:

Once the system configuration has been exported to an XML file, you can import the configuration back to the system at any time. You can also modify settings in the XML file before importing it to iDRAC. As with export, the XML file can be imported from a local file (using Local and Remote Racadm) or from a remote CIFS/NFS share (using Local, Remote and firmware Racadm). After the import operation, the server reboots if there are any changes to BIOS, RAID or NIC attributes; no reboot is required if only iDRAC attributes changed or if nothing changed relative to the exported XML file. In the command itself you can specify the shutdown type to perform before the import starts and the end state of the server after the import. The shutdown type refers to how the server is shut down once the import begins; supported values are Graceful and Forced. The end state specifies the server's power state once the import operation completes; supported values are On and Off. Once the XML import command is executed, a job is created for the import operation (using the "racadm jobqueue view -i <JOB ID>" command you can get the status of the import job). You need iDRAC/OS administrator privileges to do an XML import. iDRAC needs at least an Express license to import the XML file.

The command syntax to import the system configuration:

“racadm set -t xml –f <filename.xml> [–l <share IP/path>] [–u <username>] [–p <password>] [–b <Shut Down Type>] [–w <Wait Time>] [–s <End Power State>]”.

E.g.

CIFS Share:

            "racadm set -t xml -f <filename.xml> -l <//share IP/path> -u <username> -p <password> [-b <Shutdown type>] [-w <wait time>] [-s <End state>]"

NFS Share:

            "racadm set -t xml -f <filename.xml> -l <share IP:/path> [-b <Shutdown type>] [-w <wait time>] [-s <End state>]"

Local Share:

            "racadm set -t xml -f <path/filename.xml> [-b <Shutdown type>] [-w <wait time>] [-s <End state>]"
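As a worked example (the share address, path, and job ID are placeholders), an import from an NFS share with a graceful shutdown and the server powered back on afterwards might look like:

```shell
# Import a previously exported configuration from an NFS share;
# -b Graceful requests a graceful shutdown, -w sets the wait time for
# the shutdown (see the racadm reference for units), and -s On powers
# the server back on once the import completes
racadm set -t xml -f config.xml -l 10.0.0.5:/scpshare -b Graceful -w 300 -s On

# Track the import job created by the command above (placeholder ID)
racadm jobqueue view -i JID_123456789013
```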

Sample XML file:
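The general shape of an exported file is roughly like the fragment below; the component FQDDs and attribute names shown here are illustrative examples only and vary by system and firmware:

```xml
<SystemConfiguration>
  <!-- Each Component groups the attributes of one device, keyed by FQDD -->
  <Component FQDD="iDRAC.Embedded.1">
    <Attribute Name="Users.2#UserName">root</Attribute>
  </Component>
  <Component FQDD="BIOS.Setup.1-1">
    <Attribute Name="BootMode">Bios</Attribute>
  </Component>
</SystemConfiguration>
```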

Additional Information

iDRAC7 1.30.30 Firmware can be downloaded from here

iDRAC7 1.30.30 User Guide

More information on iDRAC

More information on Racadm

iDRAC7 now supports NTP and Time Zone configuration


This blog post is written by Shine KA and Sanjeev Singh from Dell iDRAC team.

 

           It is no longer difficult to configure or sync iDRAC7 time with the server time or with NTP servers. There is no need to set complicated time zone and daylight offset settings on iDRAC now. In addition to syncing time with the server, from iDRAC firmware release 1.30.30 onwards you can configure iDRAC to sync time with an NTP server as well. You can also set the time zone on iDRAC by selecting it from a list of 500+ time zones. Based on the selected time zone and the current iDRAC date, the daylight saving offset is applied automatically if applicable. Both NTP and the time zone can be configured using the web GUI, racadm, or the WSMAN interface. iDRAC requires at least an Express license to use the NTP and time zone features.

          One of the main challenges users had with iDRAC was syncing and setting its time. In Dell 12G servers, iDRAC can get the time from the system Real Time Clock (RTC) through the chipset even when the server is powered off. After iDRAC got the time from the RTC, the user had to set the time zone using racadm, and if a daylight saving offset applied to the selected time zone, the user had to set the daylight offset object using racadm as well. This offset had to be changed whenever the daylight saving transition occurred. Because of this complexity, users had a difficult time configuring iDRAC for SSO, Smart Card, AD, and LDAP, and uploading newly created certificates.

           

From the iDRAC 1.30.30 release onwards, you can configure NTP/time zone on iDRAC in two ways.

Only set the time zone

In this method, iDRAC automatically syncs its time with the server, and you only need to configure the time zone on iDRAC, without configuring the daylight offset through racadm. You can configure the time zone from the "iDRAC Settings -> iDRAC Settings" page of the iDRAC web interface. From racadm, you can use the idrac.time group to configure the time zone.

 

Set both NTP and the time zone

In this method, you configure both NTP and the time zone. iDRAC syncs its time with the NTP server, and the selected time zone is applied as well. iDRAC syncs to the NTP server's UTC/GMT time, so it is important to configure the correct time zone when configuring NTP on iDRAC. You can configure up to three NTP servers, and it is recommended to configure all three for accuracy. IPv4, IPv6, and FQDN formats are supported for the NTP server address. When NTP mode is enabled and iDRAC has received time through this interface, iDRAC ignores any attempts to set the iDRAC time through the server (BIOS, OpenManage, etc.) or the CMC (for blade servers).
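A hedged racadm sketch of both methods follows; the idrac.time group is mentioned above, but the NTP group and object names are assumptions based on the iDRAC7 racadm namespace, and the time zone and server addresses are placeholders:

```shell
# Method 1: set only the time zone (iDRAC keeps syncing with the server)
racadm set idrac.time.Timezone "US/Central"

# Method 2: additionally enable NTP and point at up to three servers
racadm set idrac.NTPConfigGroup.NTP1 0.pool.ntp.org
racadm set idrac.NTPConfigGroup.NTP2 1.pool.ntp.org
racadm set idrac.NTPConfigGroup.NTP3 2.pool.ntp.org
racadm set idrac.NTPConfigGroup.NTPEnable Enabled
```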

 

 Additional Information:

iDRAC7 1.30.30 Firmware can be downloaded from here

iDRAC7 1.30.30 User Guide

More information on iDRAC

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #10


Hi Community,

here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well, I hope you enjoy their posts. @all MVPs If you’d like me to add your blog posts to my weekly compilation, please send me an email (florian_klaffenbach@dell.com) or reach out to me via Twitter (@FloKlaffenbach). Thanks!


Featured Posts of the Week!

Money Saving Hero of 2012: Windows 2012 In Box Deduplication Delivers Big Value by Didier van Hoye

#PSTip List all cmdlets that accept a specific object type as input by Ravikanth Chaganti

Cluster Aware Updating – Cluster CNO Name 15 Characters (NETBIOS name length) GUI Issue by Didier van Hoye

Simple vWorkspace Connection Broker Monitoring with PowerShell by Adam Driscoll


Hyper-V

Hyper-V ID Hash Table by Jeffery Hicks

A WS2012 Hyper-V Converged Fabric Design With Host And Guest iSCSI Connections by Aidan Finn

Is Dynamic Memory Coming To Linux Virtual Machines on Hyper-V? by Aidan Finn

Hyper-V Manager Windows Store App by Aidan Finn

Error Message “Video Remoting Was Disconnected” When VM Connect to a Virtual Machine by Lai Yoong Seng

Video Hyper-V Talk mit 4 MVPs Virtual Maschine by Carsten Rachfahl

Was geht mit Hyper-V, und welche Lizenzen braucht der Kunde dafür? in German by Nils Kaczenski

Hacking VMConnect by Adam Driscoll

Windows Server Core 

Money Saving Hero of 2012: Windows 2012 In Box Deduplication Delivers Big Value by Didier van Hoye

Microsoft Virtualisierungs Podcast Folge 26: SMB 3.0 Teil 1 in German by Carsten Rachfahl

KB2777646 – SMB Multichannel Skips Non-Routable IP Addresses Of NIC If Routable IP Addresses Also Configured by Aidan Finn

Windows Server Cluster

KB2549472 – W2008 R2 Cluster Node Cannot Rejoin Cluster After Node is Restarted Or Removed From Cluster by Aidan Finn

KB2549448 – W2008 R2 Clustering Uses Default Time-Out After You Configure Regroup Timeout Setting by Aidan Finn

Cluster Aware Updating – Cluster CNO Name 15 Characters (NETBIOS name length) GUI Issue by Didier van Hoye

System Center Virtual Machine Manager

Overview of System Center 2012 SP1 – Virtual Machine Manager #SCVMM #sysctr by James van den Berg

The Network Virtualization Guide with SC 2012 – VMM SP1 by Kristian Nese

SCVMM 2012 SP1 Error: Windows network virtualization is not enabled on a host NIC available for placement by Thomas Maurer

System Center Data Protection Manager

Upgrade DPM 2010 to DPM 2012 SP1 by Lai Yoong Seng

Exchange

Setting up a Geo DNS environment in your test lab – part II by Johan Veldhuis

PowerShell

#PSTip List all cmdlets that accept a specific object type as input by Ravikanth Chaganti

#PSTip Monitor file growth using WMI by Ravikanth Chaganti

#PSTip How to clear the $error variable by Aleksandar Nikolic

#PSTip Working with ISO files by 

#PSTip How can I tell if I’m in a remote PowerShell session? by 

Simple vWorkspace Connection Broker Monitoring with PowerShell by Adam Driscoll

Pseudo-Polymorphism in PowerShell by Adam Driscoll

Friday Fun: Edit Recent File by Jeffery Hicks

Security

Sicherheit ganzheitlich betrachten und umsetzen in German by Nils Kaczenski

Word

Word 2010: Vorlage neu verbinden in German by Nils Kaczenski

Events

Special PowerShell and Group Policy Session Feb 6 by Jeffery Hicks


Other MVPs I follow

James van den Berg - MVP for SCCDM System Center Cloud and DataCenter Management
Kristian Nese - MVP for System Center Cloud and Datacenter Management
Ravikanth Chaganti - MVP for PowerShell
Jan Egil Ring - MVP for PowerShell
Jeffery Hicks - MVP for PowerShell
Keith Hill - MVP for PowerShell
David Moravec – MVP for PowerShell
Aleksandar Nikolic - MVP for PowerShell
 - MVP for PowerShell
Adam Driscoll - MVP for PowerShell
Marcelo Vighi - MVP for Exchange
Johan Veldhuis - MVP for Exchange
Lai Yoong Seng - MVP for Virtual Machine
Rob McShinsky - MVP for Virtual Machine
Hans Vredevoort - MVP for Virtual Machine
Leandro Carvalho - MVP for Virtual Machine
Didier van Hoye - MVP for Virtual Machine
Romeo Mlinar - MVP for Virtual Machine
Aidan Finn - MVP for Virtual Machine
Carsten Rachfahl - MVP for Virtual Machine
Thomas Maurer - MVP for Virtual Machine
Alessandro Cardoso - MVP for Virtual Machine
Robert Smit – MVP for Cluster
Marcelo Sinic - MVP Windows Expert-IT Pro
Ulf B. Simon-Weidner - MVP for Windows Server – Directory Services
Meinolf Weber - MVP for Windows Server – Directory Services
Nils Kaczenski - MVP for Windows Server – Directory Services
Kerstin Rachfahl - MVP for Office 365

No MVP but he should be one

Jeff Wouters - PowerShell

Dell TechCenter Rockstars - Building Online Community through Relationships (video)


Like any real life community, the foundation of an online community like Dell TechCenter is relationships between its members.

The Dell TechCenter IT community focuses on professionals who work in the data center. It has been going strong for over five years now, thanks to the thousands of members who contribute to the online conversation (whether it takes place in forum posts, blog comments, or on Twitter) and the dedicated staff at Dell who moderate the site, answer customer questions, and create new Dell-related technical content for the community.

In recent days TechCenter members who have been interacting online for years finally got to meet in person.  In December 2012, many Dell TechCenter Rockstars (who are among the top 1% of users on the site) came to Austin, Texas to participate in the Dell World 2012 conference to learn more about IT industry topics, interact with Dell staff, and get to know each other better. 

As part of their experience in Austin, several TechCenter Rockstars collaborated on a panel to give insights on their community experiences and what makes a successful online B2B community. They shared their motivations for interacting with companies and other IT professionals online, and highlighted many examples of community best practices.

(Please visit the site to view this video)

In their 20 minute community session, TechCenter Rockstars gave insightful answers to many questions:

  • What is the DellTechCenter.com IT community? (0:45)
  • What do the top 1% of users on community look like?  (3:05)
  • Who are the Dell TechCenter Rockstars? (5:29)
  • Why join a community and community advocacy groups like TechCenter Rockstars? (7:00)
  • What motivates community members to participate in and share? (8:17)
  • What are best practices in community motivation techniques? (10:19)
  • Why is it OK to ask dumb questions online? (11:15)
  • Do online B2B communities help customers trust a company more? (12:45)
    • What products are a result of community collaboration? (13:30)
  • Do online B2B communities effect customer purchase decisions & recommendations? (15:06)
  • What are the essential ingredients of a successful business community? (16:25)
  • Besides Dell TechCenter, what are great examples of online communities? (17:30)
    • How do Tech Professionals / IT Administrators use Twitter?

The Dell TechCenter Rockstars representing in this video include (from left to right):

The moderator of the panel is Jeff Sullivan, Online Communities Lead at Dell.

The MC who introduced the TechCenter Rockstars session is Sean Carey.

 

The following slides were used during the presentation you viewed above:

 

 

 

 

  

Do you have any additional best practices to share or questions about community? Let us know in the comments!

Blade server topology options with external Top of Rack switches for EqualLogic arrays - Part 2


First part of this discussion: http://en.community.dell.com/techcenter/b/techcenter/archive/2013/01/07/blade-server-topology-options-with-external-top-of-rack-switches-for-equallogic-arrays-part-1.aspx

In the second part of this discussion, I would like to highlight some of the possible blade server switch topologies with ToR using stack, Link Aggregation Group (LAG), or Virtual Link Trunking (VLT) as switch interconnects. I will also highlight whether the topologies are ideal for internal, external arrays, or both.

 

Possible Stacking Topologies:

 

With most stacking implementations, switch firmware upgrades require restarting the switches in the stack. In the Dual Stack (top left) configuration, the blade IOM switches are stacked to each other and the ToR switches are stacked to each other. A single LAG inter-connects these 2 stacks. In the Single Stack (top right) configuration, only the ToRs are stacked and each blade IOM has a LAG to the stack. In the second Single Stack (bottom left) configuration, only Blade IOMs are stacked and each ToR has a LAG into the stack.

Note: It is recommended to keep the internal arrays in a separate EqualLogic pool from the external arrays. This avoids the two switch hops needed for internal and external arrays in the same pool to communicate. If the SI configuration is good for a particular topology, it implies that the switch hop count between internal arrays and/or between external arrays in the same pool is at the minimum possible value of 1. The minimal hop count between arrays in a pool keeps the latency for inter-array network traffic within a pool to a minimum.

 

Possible LAG Topologies:

A particular LAG interconnect can be intentionally blocked with Spanning Tree Protocol (STP) by manually setting a high link cost. If another LAG fails, this blocked LAG becomes active after STP re-convergence.

 

Possible VLT Topologies:

A pair of ToR switches can be set up in a VLT domain with the VLT interconnect (VLTI) between them. Blade IOM switches can have a single LAG that spans the VLT domain switches; the VLT domain appears as a single LAG destination to the blade switches. If the blade IOM switches are stacked, as in the configuration on the left, then a single VLT LAG connects the stack to the VLT domain pair. If the blade IOM switches are not stacked, then each blade IOM has a VLT LAG going into the VLT domain pair.

In summary, various factors need to be considered in designing a blade server network topology for connecting to external storage. These include switch interconnect technologies and their associated benefits or limitations for high availability, scalability, manageability and performance. The presence of blade chassis internal arrays in addition to external arrays also impacts the design.

 

My colleagues Clay Cooper and Guy Westbrook are publishing a series of white papers on detailed test studies and best practices on blade connectivity topologies for EqualLogic:

 

My team and our publications: http://en.community.dell.com/techcenter/storage/w/wiki/2631.storage-infrastructure-and-solutions-team.aspx

Announcing iDRAC7 support for Safari and Google Chrome browsers.


This blog post is written by Shine KA and Meghna Taneja from Dell iDRAC team.

We are excited to announce that the latest iDRAC7 release (firmware 1.30.30 onwards) now supports the Safari and Chrome browsers in addition to Internet Explorer and Firefox.

Google Chrome Browser

            You can use the Chrome browser (version 22) on Windows 8 or Windows 2012 to manage and monitor iDRAC7. All iDRAC7 GUI features are available through Chrome as well.

 

Apple MacBook as an iDRAC7 Client

            Now you can use a Mac system to manage and monitor Dell servers using iDRAC7. You can use either the Firefox or Safari (version 5.x) browser to manage iDRAC from a Mac. All iDRAC7 features accessible from the iDRAC GUI, for example Virtual Console (Java plugin), Virtual Media (Java plugin), and boot capture, can be accessed from a Mac client.

 How to Configure Apple MacBook as an iDRAC7 Client

Launching iDRAC using IPv6 Address

       You cannot launch the iDRAC GUI when an IPv6 address is used and iDRAC still has its default certificate. To use IPv6 to launch iDRAC, you need to either

1). Upload an SSL certificate from a valid Certificate Authority to iDRAC and use the IPv6 address to launch iDRAC

Or

2). Register iDRAC on DNS and use iDRAC DNS name (FQDN) to launch iDRAC GUI.
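For option 1), a sketch of the racadm side (certificate type 1 is the iDRAC server certificate; the file name below is a placeholder, and iDRAC restarts to apply the new certificate):

```shell
# Upload a CA-signed server certificate (-t 1) to iDRAC
racadm sslcertupload -t 1 -f server_cert.pem

# Restart iDRAC so the new certificate takes effect
racadm racreset
```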

“Single Cursor” and “Pass All Keystrokes” modes in vConsole for iDRAC

          “Single Cursor” and “Pass All Keystrokes” modes will not work if “Enable access for assistive devices” is disabled on the Mac client. This option can be enabled by selecting the “Enable access for assistive devices” checkbox on the Universal Access page of System Preferences (see screenshot below). If this option is not selected, a warning message is shown when the user tries to launch Virtual Console.

Virtual Media – Mapping USB Key as Read Write

       When a USB key is connected to the Mac client and mounted on the Mac as read/write, iDRAC Virtual Media can use the device only as read-only; read/write is not supported from Virtual Media. To connect the USB key as read/write from Virtual Media, unmount the USB key from the Mac client:

1). Open terminal window from MAC

2). Run the command to unmount: diskutil unmountDisk /Volumes/drivename (sudo may be required for access).
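Put together, and assuming a volume named MYUSB (a placeholder), the terminal steps look like this; `unmountDisk` is the diskutil verb for unmounting an entire drive rather than a single volume:

```shell
# Unmount the whole USB drive from the Mac host so that iDRAC
# Virtual Media can attach it read/write (volume name is a placeholder)
sudo diskutil unmountDisk /Volumes/MYUSB
```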

 

Additional Information:

iDRAC7 1.30.30 Firmware can be downloaded from here

iDRAC7 1.30.30 User Guide

More information on iDRAC


Storage for VDI: “Which EqualLogic hybrid array shall I use?”


To deliver a consistent, secure and reliable personal desktop experience to the ever increasing mobile and remote user base, many IT organizations are considering virtual desktop infrastructure (VDI) deployment. Centralized management of virtual desktops helps simplify desktop deployment, software updates, and patch management. So, desktop virtualization can significantly reduce the operating costs of managing a very diverse client device environment.

Like everything else in life, VDI comes with its own set of challenges though. In particular, cost and management associated with storage for VDI deployments can be a significant hurdle. VDI storage requires the capacity to store not only desktop virtual machines, but also the associated end-user application data. Storage for VDI should also deliver the performance necessary for handling utilization spikes of short duration—I/O storms—such as those generated when large numbers of virtual desktops are booted simultaneously or hundreds of end users log in or log off at the same time.

Dell EqualLogic PS Series iSCSI hybrid SAN arrays are well suited for VDI environments. These hybrid arrays combine both solid-state drives (SSDs) and traditional hard disk drives (HDDs) in the same chassis and include features such as automated data tiering and centralized management tools to support efficient VDI deployments.

We started with a single hybrid model nearly 2.5 years ago, and now EqualLogic offers three different hybrid array models. With the last two hybrid models introduced since last summer, the most frequent question we are getting from our customers is this: “Which EqualLogic hybrid array shall I buy for my VDI deployment?”

This article addresses the question. In a nutshell, the answer is, “It depends on your VDI use case.” (What else could it be? :-) )

See the figure below for the SSD/HDD capacity ratio and VDI workload performance positioning for all three hybrid models before we discuss the VDI use cases for each.

EqualLogic PS-M4110XS series hybrid arrays provide a blade form factor suitable for a comprehensive, self-contained VDI solution within a modular and compact blade enclosure. Together with Dell PowerEdge™ blade servers and Dell Force10™ blade switches, these hybrid blade arrays create a data center in a box for VDI deployments. This approach helps organizations reduce virtual desktop deployment and operational costs through efficient use of switching resources, minimized cabling, and consolidated management. A hybrid of 2 TB SSD and 5.4 TB serial attached SCSI (SAS) drive capacity offers a performance-to-capacity ratio well suited for efficient, high-density, compact form factor VDI deployments with a comprehensive, converged compute, networking, and storage infrastructure.

EqualLogic PS6110XS and EqualLogic PS6100XS series hybrid arrays are designed to offer VDI-optimized storage. Combined high-performance SSD capacity and capacity-optimized SAS drives provide a hot-to-warm data ratio that is optimized for high-performance virtual desktop environments. These environments include medium and large VDI deployments with mixed end-user profiles, including profiles for task workers, knowledge workers, and power users. SSD capacity is available for frequently accessed desktop virtual machine data, while end-user application data can be stored on capacity-optimized drives in either of these arrays or in preexisting capacity located on other EqualLogic PS Series arrays. In addition to automatic tiering within the hybrid arrays, EqualLogic PS Series software can automatically tier data across arrays based on utilization rates when multiple arrays are deployed.

EqualLogic PS6510ES and EqualLogic PS6500ES series hybrid arrays can be an excellent fit in organizations deploying VDI environments that are smaller in scale and less-demanding virtual desktop deployments than medium or large VDI environments. One example is VDI for task workers. These arrays also provide additional storage suitable for other capacity-oriented consolidation workloads within a single array. Substantial SSD capacity helps support high-performance, low-latency desktop virtual machine storage. Large-capacity nearline SAS drives provide cost-effective high capacity for less frequently accessed end-user application files and other consolidation workloads.

Check out the details of these uses cases in this article.

What is your favorite retro gadget? IT pros get nostalgic about vintage gear


With holiday gift buying behind us, geeks around the world are no doubt enjoying the spoils of the season:  the latest tech gadgets and computer gear.  Today we live in an amazing age of  impossibly thin high definition flat screen TVs and smart devices that are more powerful than many of our first computers.  

However, no matter how cool an electronic device in its time, no gadget stays new forever.  

Years later, the once state of the art electronic gear we lusted after becomes obsolete and quaint in hindsight.  In this post we reflect on gadgets that we loved back then and despite their obsolescence, we still have fond memories of now.  We asked friends of Dell TechCenter  to share stories about their favorite gadgets from the past.  Here's what they had to say:

What is your favorite retro gadget?


  

(Image Courtesy of Wikipedia Commons)

 

Peter Tsai - Dell TechCenter crew

In 1999, the PDA (Personal Digital Assistant) was a huge novelty. The Palm V stood out with its whopping 2MB of memory, serial-cable sync connectivity, black-and-white screen (with Indiglo backlight), lithium-ion battery in a thin form factor, and touch screen. I was using the Palm V all the time in my college classes, and to my professors I probably looked like the most studious kid in the class. While sometimes I was taking notes, most of the time I was actually playing spades or minesweeper to stay awake.


Kong Yang - Dell TechCenter crew

The Nintendo Entertainment System (NES). I spent countless hours with family and friends playing Super Mario Brothers, Mike Tyson’s Punch-Out, Zelda, Contra, Tecmo Super Bowl Football, Double Dragon and many others. NES was home arcade entertainment for me growing up. Now, there are emulators that allow you to play these same games on your PC and smart devices to relive those moments; but nothing beats the original memories on the NES. Classic.


Jeff Sullivan - Dell TechCenter crew

It’s not that old (2003), but I think the most transitional device for me was the Palm Treo 600 (and later the 650). The first real smartphone, in my opinion. Spent hours on it playing low-tech games… lost numerous styluses… wore it on my hip in a nifty leather phone case… 32MB of RAM AND a 0.3 MP camera!


  

(Image Courtesy of Wikipedia Commons)

  

Yasushi Osonoi - Dell TechCenter Japan 

CASIO PV-7. In 1983, Microsoft and ASCII (a Japanese company) created a PC standard called “MSX”, an 8-bit computer platform based on the Z80 CPU. I bought the CASIO PV-7, CASIO's version of MSX. It was reasonably priced at about $300, but the hardware spec was very low compared to current PCs: a Z80 CPU at 2 or 3 MHz, 8KB of main memory, and no storage (I finally bought a tape drive at 2000 bps). A BASIC interpreter was built in, so I started programming in BASIC on this PC. Almost 30 years ago.


Matt Lorimer - Dell TechCenter Rockstar

Best gadget ever was the HP 48GX.  Hands down.  I loved that thing.  I remember flipping through the manual to find how to do whatever we were working on in math class that day.  When I got into Trig, my math teacher wouldn't let me use the built in functions because I "didn't learn anything that way."  After some negotiating, I convinced her to let me use functions that I wrote on my own.  I could then just turn in my code as proof of work.  That was the best math class I ever took.


Alexander Nimmannit - Dell TechCenter Rockstar

Today it's nothing out of the ordinary to carry your entire music library around with you, but in 2001 (back when Apple's iPod was the new kid on the portable digital audio player scene) you could probably count on one hand the ones that had made the jump from MegaBytes to GigaBytes capacity. My good old Panasonic SV-SD80 was not in the high capacity league, rather it was the "world's smallest" digital audio player of its time (with a H x W x D of 1.750'' x 1.688'' x 0.688'' ) and it's because of this that I was able to unobtrusively take it anywhere by wearing it under clothing on its lanyard around my neck. It was not without faults (you had to transcode your audio files onto the included SD card using RealJukebox) but the plethora of included accessories, the size, and the fact that I no longer had to transfer my mp3s over a serial connection more than made up for any faults.


(Image Courtesy of Wikipedia Commons)

 

Four different Dell TechCenter members reminisced about Iomega Zip Drives:

Joe Garza - Dell Inc - "Those were my Blu-Rays of the 90's."

Matt Vogt - Dell TechCenter Rockstar - "I was the coolest kid in the dorm with my 100MB parallel zip drive!"

Lance Boley - Dell TechCenter crew - "those were cool. I had a SCSI one and it was fantastic, loved it."

Corey Betka - Dell TechCenter Rockstar - "All the cool kids got Jaz drives after their Zip drives were too small."


Rod Trent - Dell TechCenter Rockstar

My absolute favorite PDA was the HP Jornada. With the flip-up screen it felt just like a Star Trek communicator. It came with a carrying case that attached to your belt (like a pager). It was blue, but if you closed your eyes you could just imagine that you were requesting to be beamed up to the Enterprise.


Stephen Spector - Dell Cloud Evangelist - Dell Cloud on TechCenter

My Palm Zire 72 was my greatest tool for tracking all my contacts, to-do lists, etc. In fact, I spent hours perfecting my writing technique to properly create symbols and letters (see the help guide on the inside of the case). My obsession with handheld info devices began with this product, and it sits close to me in my home office.


Steven Zessin - Dell Lifecycle Controller Wiki 

My first device / computer growing up was a blisteringly fast Atari 400, which had 4K of RAM, hence the model name. Since it did not have a hard drive, I had to use tapes to load games into memory. Loading Pac-Man or Jumpman took about an hour; ah, the good old days.


Kyle Bader - Friend of Dell TechCenter - Diamond Rio MP3 player (1998), featuring 32MB of memory

I remember those too - it's hard to believe that even though they could hold less than 10 songs, people still bought a lot of these!


Do you have a favorite retro gadget?  

Tell us about your vintage gear in the comments or keep the conversation going with us on Twitter @DellTechCenter!

 

PowerEdge M I/O Aggregator Software Update From CMC


The Chassis Management Controller (CMC) has been adding management features for the PowerEdge M I/O Aggregator switch. CMC firmware version 4.2 included the ability to configure credentials, VLANs, SNMP community strings, and syslog server IP addresses. The newest release, CMC firmware version 4.3, includes the ability to update the Aggregator’s software from the CMC.

The image below shows the IOM Firmware and Software page, from which you can perform software updates and rollbacks. This page can be accessed by selecting the I/O Module Overview section of the tree list and then entering the Update tab. The update operation allows you to select a software image, which will then be loaded onto the Aggregator. The rollback operation loads the backup software image onto the Aggregator, causing the active image to become the backup. It is important to note that during both the rollback and update operations, the Aggregator will reboot in order to load the new software image.

There are six IOM slots available in the chassis, but the IOM Software table shows entries only for those that are populated. Aggregators display all fields in the table and allow software operations; other network switches do not yet support software updates and may not display all fields. Multiple Aggregators may be updated or rolled back at the same time; when updating multiple Aggregators simultaneously, the same image is applied to each of them. The Update Status column describes what stage of the update each Aggregator is in. For a successful update you will see the following sequence of messages.

  • Ready
  • Initializing
  • File Transfer In Progress
  • Firmware Update In Progress
  • Resetting

With the ability to perform software updates, the CMC and Aggregator are becoming Better Together. We are looking forward to continuing this trend by integrating more Aggregator configuration options into the CMC. If you have any feedback on the Aggregator software update process or have suggestions for further integrated configuration options, please leave us a comment below!

For further information regarding the CMC, see the CMC Wiki page.
For further information regarding the IOA, see the IOA Wiki page.
For information regarding CMC firmware version 4.2 IOA configuration features, see this blog post.

iDRAC 7 now supports vConsole Scaling


This blog post is written by Shine KA and Meghna Taneja from Dell iDRAC team.

It is even easier to use Virtual Console with the iDRAC7 1.30.30 firmware release, which includes our scaling solution. Now you can easily view the entire server screen using the virtual console, without a scroll bar. The picture above shows how multiple servers can be easily managed using the virtual console scaling solution.

This feature is enabled by default and no configuration is required. With this solution we have eliminated the need to scroll to view the entire screen in the iDRAC7 vConsole or to change the server resolution for every client. The scaling solution is supported with both the ActiveX and Java plugins on all supported clients.

While launching vConsole, if the server resolution is higher than the client resolution, vConsole is automatically scaled down to the client resolution and the entire screen is displayed without a scroll bar. If the server resolution is less than the client resolution, vConsole is launched at the actual screen resolution. Users can perform all keyboard and mouse commands and use all vConsole features within this customized vConsole window.

You can also set vConsole to a custom size by resizing the vConsole window. In this case the server resolution is resized to fit the vConsole window, and the console is shown without a scroll bar.

After customizing the vConsole window, users who want to go back to the actual resolution can use the “Fit” option in the vConsole window. This can be used to increase or decrease the resolution on the vConsole client.

You can make use of this feature if you want to monitor multiple servers: set multiple vConsoles to custom sizes and view all iDRAC sessions at the same time by placing the vConsole windows at different locations on the desktop.

Also, when you want to view one server in detail, you can go into full-screen mode, perform the required operations, and come back to windowed mode. When coming back out of full-screen mode, the last saved resolution and window position are retained so that all vConsoles are still visible. There is no need to change the resolution and position after a full-screen operation.


Additional Information

Learn more about iDRAC7 at http://www.delltechcenter.com/iDRAC

Dell EqualLogic Configuration Guide (ECG) version 13.4



Hello EQL Family,

I am proud to announce that the EqualLogic Configuration Guide (ECG)
has been updated to version 13.4.

The EqualLogic Configuration Guide (ECG) is a valuable tool used for planning,
deploying, and managing EqualLogic SANs.

Link to latest version of ECG: 
 http://en.community.dell.com/techcenter/storage/w/wiki/equallogic-configuration-guide.aspx

What’s changed from version 13.3 to version 13.4:

  • New formatting and section numbering
  • Updated Preparing for Firmware Upgrade and Controller Failover
  • Added note box to reflect 10Gb support only on PS4110 and PS6110 ports
  • Updated for PS-M4110XS and PS65x0 Hybrid Arrays
  • Added note box to reflect no support for Direct Attach Storage (DAS)
  • New Section - Appendix D: Upgrade Paths for EqualLogic PS Series Arrays

The ECG is now published on a quarterly schedule.

This new scheduling provides the following advantages:

  1. Enables the marketing teams to better support content creation and
    coordinate schedules with other organizations.
  2. Allows engineering and marketing teams to better align the creation
    of best practices whitepapers and the ECG publication dates.
  3. Incorporates results from engineering work on the white papers to
    provide more robust ECG content.

Look for the next update to the ECG in March 2013.

New Formatting and Section Numbering

The most significant change to the ECG is the reformatting: the guide has been reorganized to provide a more logical flow and a more efficient way to access its content.

The changes will also allow us to create independent chapters/sections that
can be accessed and/or downloaded separately.

How will this benefit you?

The new layout will allow you to download only the chapter(s) that you are
interested in, as opposed to having to download the entire ECG.

Coming soon: availability of separately downloadable chapters.

Why should I download the latest version of the ECG?

To ensure consistency when referencing a particular chapter and/or section.

Go ahead and click the link to download the latest ECG, version 13.4:

http://en.community.dell.com/techcenter/storage/w/wiki/equallogic-configuration-guide.aspx

 

Four major reference documents from the EqualLogic Technical Marketing team:

  • EqualLogic Compatibility Matrix (ECM):

 http://en.community.dell.com/techcenter/storage/w/wiki/2661.equallogic-validated-components.aspx

  • EqualLogic Configuration Guide (ECG):

 http://en.community.dell.com/techcenter/storage/w/wiki/equallogic-configuration-guide.aspx

  • Rapid EqualLogic Configuration Portal (RECP):

http://en.community.dell.com/techcenter/storage/w/wiki/3615.rapid-equallogic-configuration-portal-by-sis.aspx

  • Switch Configuration Guide (SCG):

http://en.community.dell.com/techcenter/storage/w/wiki/4250.switch-configuration-guides-by-sis.aspx

If you have recommendations or things that you would like to see in the ECG,
feel free to leave a comment or email me, and I will be happy to consider them for the next update.

Twitter:    @GuyAtDell
Email:      guy_westbrook@dell.com

Until next time,

Guy

Click below to access the Storage Infrastructure and Solutions (SIS) team publications library,
which has all of our best practice white papers, reference architectures, and other EqualLogic documents:

http://en.community.dell.com/techcenter/storage/w/wiki/2631.storage-infrastructure-and-solutions-team.aspx

 
