
Top 15 hardware interview questions



How to find out details about hardware on a Linux machine?

i.e., how do I get a full list of hardware components from the command line (on a machine with no window system)?

Thank you.
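
A minimal sketch of one way to pull this together from a console-only machine, assuming the usual command-line tools (lshw, lspci, lsblk, lscpu) are available or installable; the /proc files work even on a minimal install. Python is used here only as a thin wrapper around those tools:

    import shutil, subprocess

    def show(title, cmd):
        # Run a hardware-listing tool if it is present on the system.
        print(f"== {title} ==")
        if shutil.which(cmd[0]):
            subprocess.run(cmd, check=False)
        else:
            print(f"({cmd[0]} is not installed)")

    show("Hardware summary", ["lshw", "-short"])   # CPU, RAM, disks, NICs (run as root for full detail)
    show("PCI devices", ["lspci"])
    show("Block devices", ["lsblk"])
    show("CPU details", ["lscpu"])

    # /proc is always available, even on a minimal install:
    for path in ("/proc/cpuinfo", "/proc/meminfo"):
        print(f"== {path} ==")
        with open(path) as fh:
            print(fh.read())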


Source: (StackOverflow)

Does a 500 watt power supply always use 500 watts of electricity? [closed]

Does a 500 watt power supply always pull 500 watts? Or does it depend on the load being placed on the computer?

It's a n00b hardware question. I'm trying to figure out how much it costs to run my computer without buying a meter that actually measures power usage.
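
As a framing note, a power supply's 500 W rating is the maximum it can deliver, not a constant draw; the machine pulls what its load requires, plus conversion losses. A rough sketch of the cost arithmetic, using placeholder wattages and a placeholder electricity rate:

    def monthly_cost_usd(avg_watts, cents_per_kwh, hours_per_day=24, days=30):
        # kWh consumed over the period, multiplied by the price per kWh.
        kwh = avg_watts / 1000 * hours_per_day * days
        return kwh * cents_per_kwh / 100

    # Placeholder draw figures (idle-ish, typical, heavy load) and a placeholder rate.
    for watts in (80, 150, 300):
        print(f"{watts} W average: ${monthly_cost_usd(watts, cents_per_kwh=12):.2f}/month")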


Source: (StackOverflow)

SSD head / cylinder / cluster details

A customer of ours makes industrial robots that run on very old, but stable, hardware and software. The only bottleneck has always been the hard drive in these moving machines. Due to constant movement (shocks etc.) HDDs normally don't survive beyond six months.

So now we're trying to connect an SSD. The motherboard doesn't have a SATA connection (no surprise there) so we're using a SATA-to-IDE converter to connect it to the IDE port on the motherboard. This works and the BIOS recognizes the drive.

Only problem is that it won't boot. It freezes on POST. In the BIOS (from the 1990s), we need to specify some values, called 'HEADS', 'SYL', 'CLUSTER', and 'LANDZ'. Unlike traditional HDDs, this drive obviously has no platters. Is there a way the drive mimics these things on IDE and can we somehow find out what these values should be for our specific drive? We have changed the values at random and sometimes it passes POST, sometimes it doesn't. If it does, however, it still doesn't boot and just says there's no drive connected.

In short, does anyone have any experience connecting a SATA SSD to an old IDE motherboard and what can we do to make this work (if anything)?
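
One hedged way to approach the geometry fields: for anything without platters they are purely logical, and a common convention is a 16-head, 63-sectors-per-track translation with 512-byte sectors (the drive's own IDENTIFY data, e.g. via hdparm -I on a Linux box, may also report a preferred logical geometry). A sketch of that arithmetic with an illustrative capacity:

    SECTOR_BYTES = 512        # assumed sector size
    HEADS = 16                # classic logical translation
    SECTORS_PER_TRACK = 63

    def legacy_chs(capacity_bytes, cylinder_cap=65535):
        # Derive a logical cylinder count; very old BIOSes cap this (often at 1024 or 65535).
        total_sectors = capacity_bytes // SECTOR_BYTES
        cylinders = min(total_sectors // (HEADS * SECTORS_PER_TRACK), cylinder_cap)
        return cylinders, HEADS, SECTORS_PER_TRACK

    cyl, heads, spt = legacy_chs(64 * 10**9)      # e.g. a 64 GB SSD (illustrative)
    # A landing zone is meaningless for flash; it is commonly set to the last cylinder or 0.
    print(f"CYLS={cyl} HEADS={heads} SECTORS={spt} LANDZ={cyl - 1}")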


Source: (StackOverflow)

What is this wiring panel and what are these Ethernet "ports?"

In my office building there is a stone-age LAN rack with some Ethernet ports I've never seen before. I need to find the name of these ports, if they have one, and then buy some cables or adapters.

Unfortunately, I'm not allowed to dismantle the whole thing and connect the cables to a normal RJ45 rack.

All the cables connected to the front of the rack have an RJ45 male connector on the other end. On the rack I can read "AT&T 110DW2-100". I checked the cables; there are no hints on them.

Here you can see a pic of the ports and some cables connected to the switch:

[image: the panel's ports, with some cables connected to the switch]

Does anyone know the name of these ports?


Source: (StackOverflow)

High Failure Rate of Large Drives?

I recently deployed a server with 5x 1TB drives (I won't mention their brand, but it was one of the big two). I was initially warned against getting large-capacity drives, as a friend advised me that they have a very low MTBF and that I would be better off getting more, smaller-capacity drives, since they are not being 'pushed to the limit' of what the technology can handle.

Since then, three of the five disks have failed. Thankfully I was able to replace and rebuild the array before the next disk failed, but it's got me very very worried.

What are your thoughts? Did I just get them in a bad batch? Or are newer/higher capacity disks more likely to fail than tried and tested disks?
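
A back-of-the-envelope way to frame this: even with a modest per-drive annualized failure rate (AFR), the odds of seeing at least one failure in a five-drive array within a year are higher than intuition suggests. The AFR values below are illustrative assumptions, not figures for any particular model:

    def p_at_least_one_failure(afr, drives):
        # Probability that at least one of `drives` independent drives fails within a year.
        return 1.0 - (1.0 - afr) ** drives

    for afr in (0.02, 0.05, 0.10):    # hypothetical annualized failure rates
        print(f"AFR {afr:.0%}: P(>=1 of 5 fails within a year) = "
              f"{p_at_least_one_failure(afr, drives=5):.1%}")

Three failures out of five is far beyond what independent failures at rates like these would predict, which is why a bad batch or a shared cause (heat, vibration, power) tends to be the first suspect.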


Source: (StackOverflow)

Is anyone else using OpenBSD as a router in the enterprise? What hardware are you running it on? [closed]

We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations we're looking at upgrading them to some proper server-grade hardware with support etc.

These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and PF, so we're hesitant to move away from them to something else, such as dedicated Cisco hardware.

I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear if other people use a setup like this in their business, or have migrated to or away from one.


Source: (StackOverflow)

Do I need to RAID Fusion-io cards?

Can I run reliably with a single Fusion-io card installed in a server, or do I need to deploy two cards in a software RAID setup?

Fusion-io isn't very clear (almost misleading, even) on the topic in its marketing materials. Given the cost of the cards, I'm curious how other engineers deploy them in real-world scenarios.

I plan to use the HP-branded Fusion-io ioDrive2 1.2TB card for a proprietary standalone database solution running on Linux. This is a single server setup with no real high-availability option. There is asynchronous replication with a 10-minute RPO that mirrors transaction logs to a second physical server.

Traditionally, I would specify a high-end HP ProLiant server with the top CPU stepping for this application. I need to go to SSD, and I'm able to acquire Fusion-io at a lower price than enterprise SAS SSD for the required capacity.

  • Do I need to run two ioDrive2 cards and join them with software RAID (md or ZFS), or is that unnecessary?
  • Should I be concerned about Fusion-io failure any more than I'd be concerned about a RAID controller failure or a motherboard failure?
  • System administrators like RAID. Does this require a different mindset, given the different interface and on-card wear-leveling/error-correction available in this form-factor?
  • What IS the failure rate of these devices?

Edit: I just read a Fusion-io reliability whitepaper from Dell, and the takeaway seems to be "Fusion-io cards have lots of internal redundancies... Don't worry about RAID!!".
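
On the RAID question specifically, a back-of-the-envelope comparison of a single card against a mirrored pair can at least frame the trade-off. The failure rates below are purely hypothetical, and the model ignores rebuild windows, correlated failures, and the server's other single points of failure (controller, motherboard, power):

    def annual_loss_probability(afr, mirrored):
        # For a mirror, data is lost (to a first approximation) only if both cards fail.
        return afr ** 2 if mirrored else afr

    for afr in (0.01, 0.03):          # hypothetical annual failure rates, not vendor figures
        print(f"AFR {afr:.0%}: single card {annual_loss_probability(afr, False):.2%}, "
              f"mirrored pair {annual_loss_probability(afr, True):.4%}")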


Source: (StackOverflow)

Why are enterprise SAS disk enclosures seemingly so expensive?

I will begin by stating that I do not believe this is a duplicate of Why is business-class storage so expensive?.

My question is specifically about SAS drive enclosures, and justifying their expense.

Examples of the types of enclosures I'm referring to are:

  • HP D2700
  • Dell MD1220
  • IBM EXP3524

Each of the above is a 2U direct attached external SAS drive enclosure, with space for around 24 X 2.5" drives.

I'm talking about the bare enclosure, not the drives. I am aware of the difference between enterprise class hard drives and consumer class.

As an example of "ball-park" prices, the HP D2700 (25 X 2.5" drives) is currently around $1750 without any drives (checked Dec 2012 on Amazon US). A low end HP DL360 server is around $2000, and that contains CPU, RAM, motherboard, SAS RAID controller, networking, and slots for 8 X 2.5" drives.

When presenting clients or management with a breakdown of costs for a proposed server with storage, it seems odd that the enclosure is a significant item, given that it is essentially passive (unless I am mistaken).

My questions are:

  1. Have I misunderstood the components of a SAS drive enclosure? Isn't it just a passive enclosure with a power supply, SAS cabling, and space for lots of drives?

  2. Why is the enclosure seemingly so expensive, especially when compared to a server? Given all the components that an enclosure does not have (motherboard, CPU, RAM, networking, video), I would expect it to be significantly less expensive.

Currently our strategy when making server recommendations to our clients is to avoid recommending an external drive enclosure because of the price of the enclosures. However, assuming one cannot physically fit enough drives into the base server, and the client does not have a SAN or NAS available, then an enclosure is a sensible option. It would be nice to be able to explain to the client why the enclosure costs as much as it does.
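
To frame the comparison using the ball-park prices quoted above, the per-bay arithmetic works out as follows (the server price also buys CPU, RAM, and a controller, so it is not an apples-to-apples figure):

    d2700_price, d2700_bays = 1750, 25    # bare enclosure, price from the question
    dl360_price, dl360_bays = 2000, 8     # low-end server, price from the question

    print(f"HP D2700: ${d2700_price / d2700_bays:.0f} per 2.5\" bay")   # ~$70/bay
    print(f"HP DL360: ${dl360_price / dl360_bays:.0f} per 2.5\" bay")   # ~$250/bay, including a whole server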


Source: (StackOverflow)

Consumer (or prosumer) SSDs vs. fast HDDs in a server environment

What are the pros and cons of consumer SSDs vs. fast 10-15k spinning drives in a server environment? We cannot use enterprise SSDs in our case, as they are prohibitively expensive. Here are some notes about our particular use case:

  • Hypervisor with 5-10 VMs max. No individual VM will be crazy I/O-intensive.
  • Internal RAID 10, no SAN/NAS...

I know that enterprise SSDs:

  1. are rated for longer lifespans
  2. and perform more consistently over long periods

than consumer SSDs... but does that mean consumer SSDs are completely unsuitable for a server environment, or will they still perform better than fast spinning drives?

Since we're protected via RAID/backup, I'm more concerned about performance over lifespan (as long as lifespan isn't expected to be crazy low).
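
One way to frame the performance side is rough random-IOPS arithmetic for the RAID 10 set. The per-device numbers below are illustrative assumptions rather than benchmarks (a 15k spindle delivers on the order of a couple of hundred random IOPS, while even consumer SATA SSDs reach tens of thousands):

    def raid10_iops(read_iops, write_iops, devices):
        # Reads can be served by any member; each write lands on both halves of a mirror.
        return read_iops * devices, write_iops * devices / 2

    for name, r, w in [("15k SAS HDD", 200, 200), ("consumer SATA SSD", 50_000, 20_000)]:
        reads, writes = raid10_iops(r, w, devices=4)
        print(f"4 x {name} in RAID 10: ~{reads:,.0f} read / ~{writes:,.0f} write IOPS")

The usual caveat is that consumer drives generally lack power-loss protection and their write performance drops as they fill, which can matter more than the headline numbers.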


Source: (StackOverflow)

Is it necessary to burn-in RAM for server-class hardware?

Considering the fact that many server-class systems are equipped with ECC RAM, is it necessary or useful to burn-in the memory DIMMs prior to their deployment?

I've encountered an environment where all server RAM is put through a lengthy burn-in/stress-testing process. This has delayed system deployments on occasion and impacts hardware lead time.

The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors rather than directly from the system manufacturer, as it would be with a Dell PowerEdge or HP ProLiant.

Is this a useful exercise? In my past experience, I simply used vendor RAM out of the box. Shouldn't the POST memory tests catch DOA memory? I've responded to ECC errors long before a DIMM actually failed, as the ECC thresholds were usually the trigger for a warranty replacement.

  • Do you burn-in your RAM?
  • If so, what method(s) do you use to perform the tests?
  • Has it identified any problems ahead of deployment?
  • Has the burn-in process resulted in any additional platform stability versus not performing that step?
  • What do you do when adding RAM to an existing running server?
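
On the "what method(s)" point: real burn-ins typically rely on memtest86+ or vendor diagnostics run for many hours. Purely to illustrate the idea, a toy user-space pattern test might look like the sketch below; the sizes and pass counts are placeholders, and it only exercises whatever slice of RAM the OS hands the process:

    CHUNK_BYTES = 64 * 1024 * 1024        # 64 MB per buffer; adjust to the host
    PASSES = 3                            # a real burn-in runs for hours or days

    def one_pass():
        for pattern in (b"\x00", b"\xff", b"\xaa", b"\x55"):
            expected = pattern * CHUNK_BYTES
            buf = bytearray(expected)     # write the pattern into freshly allocated memory...
            if bytes(buf) != expected:    # ...then read it back and verify
                raise SystemExit(f"Mismatch while verifying pattern {pattern!r}")
        print("pass OK")

    for _ in range(PASSES):
        one_pass()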

Source: (StackOverflow)

HP ProLiant DL360 G7 hangs at "Power and Thermal Calibration" screen

I have a new HP ProLiant DL360 G7 system that is exhibiting a difficult-to-reproduce issue. The server randomly hangs at the "Power and Thermal Calibration in Progress..." screen during the POST process. This typically follows a warm-boot/reboot from the installed operating system.

[screenshot: POST stopped at the "Power and Thermal Calibration in Progress..." screen]

The system stalls indefinitely at this point. Issuing a reset or cold-start via the ILO 3 power controls makes the system boot normally without incident.

When the system is in this state, the ILO 3 interface is fully accessible and all system health indicators are fine (all green). The server is in a climate-controlled data center with power connections to a PDU. Ambient temperature is 64°F/17°C. The system was placed in a 24-hour component testing loop prior to deployment with no failures.

The primary operating system for this server is VMware ESXi 5. We initially tried 5.0 and later a 5.1 build. Both were deployed via PXE boot and kickstart. In addition, we are testing with bare-metal Windows and Red Hat Linux installations.

HP ProLiant systems have a comprehensive set of BIOS options. We've tried the default settings in addition to the Static High Performance profile. I've disabled the boot splash screen and just get a blinking cursor at that point instead of the screen shown above. We've also tried some VMware "best-practice" BIOS settings. We've seen an advisory from HP that seems to describe a similar issue, but its suggestions did not fix our specific problem.

Suspecting a hardware issue, I had the vendor send an identical system for same-day delivery. The new server was a fully-identical build with the exception of disks. We moved the disks from the old server to the new. We experienced the same random booting issue on the replacement hardware.

I now have both servers running in parallel. The issue hits randomly on warm-boots. Cold boots don't seem to have the problem. I am looking into some of the more esoteric BIOS settings like disabling Turbo Boost or disabling the power calibration function entirely. I could try these, but they should not be necessary.

Any thoughts?

--edit--

System details:

  • DL360 G7 - 2 x X5670 hex-core CPUs
  • 96GB of RAM (12 x 8GB Low-Voltage DIMMs)
  • 2 x 146GB 15k SAS Hard Drives
  • 2 x 750W redundant power supplies

All firmware is up to date as of the latest HP Service Pack for ProLiant DVD release.

Calling HP and trawling the interwebz, I've seen mentions of a bad ILO 3 interaction, but this happens with the server on a physical console, too. HP also suggested power source, but this is in a data center rack that successfully powers other production systems.

Is there any chance that this could be a poor interaction between low-voltage DIMMs and the 750W power supplies? This server should be a supported configuration.


Source: (StackOverflow)

Poor internal database - replace it or chuck hardware at it?

So - we have an internal company database, the usual kind of stuff: manages clients, phone calls, sales deals and client agreements/schemes.

It's an Access 2000 front-end, and an SQL Server 2000 Standard back-end. Single server, dual Xeon 3.2GHz, 2GB RAM, Windows Server 2003, gets about 40% CPU load all day, spread across the 4 cores visible to the OS (HT).

The back-end database is poorly designed, and has organically grown over 10+ years, maintained by less-than-skilled individuals. It is badly normalised, and some of the obvious problems include tables with tens of thousands of rows with no primary key or index, which are also used heavily in multi-table joins for some of the most heavily used parts of the system (e.g. a call manager application that sits on everyone's second monitor for 8 hours a day and runs a big inefficient query every few seconds).
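
As a first triage step, a quick inventory of which tables are heaps or lack a primary key can be pulled from SQL Server 2000's system catalog. A minimal sketch, assuming pyodbc and an ODBC driver that can reach the instance (the server name and credentials are placeholders):

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=dbserver;DATABASE=CompanyDB;"
        "UID=readonly_user;PWD=example"           # placeholder connection details
    )

    query = """
    SELECT o.name,
           OBJECTPROPERTY(o.id, 'TableHasPrimaryKey') AS has_pk,
           OBJECTPROPERTY(o.id, 'TableHasClustIndex') AS has_clustered_index
    FROM   sysobjects o
    WHERE  o.xtype = 'U'                          -- user tables only
    ORDER  BY o.name
    """

    # Print only the problem tables: no primary key or no clustered index (i.e. a heap).
    for name, has_pk, has_ci in conn.cursor().execute(query):
        if not has_pk or not has_ci:
            print(f"{name}: primary key={bool(has_pk)}, clustered index={bool(has_ci)}")

Adding keys and indexes to the worst offenders, especially the tables behind the call-manager query, may be a cheaper experiment than either of the options discussed below.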

The front-end is not much better: it's the typical mess of hundreds of forms, nested saved queries, poorly written embedded SQL in the VBA code, dozens of "quirks", etc., and whenever a change is made, something unrelated seems to break. We have settled on one MDB that works "well enough" and now have a no-change policy on it, as we have no Access heavyweights in-house (and no plans to hire one either).

The company is now slowly growing, with increasing numbers of clients, calls, etc., as well as a modest increase in the number of concurrent users, and performance has been getting noticeably worse just recently (waiting to move between forms, waiting for lists to populate, etc.).

Perfmon says:

  • Disk transfers per second: between 0 and 30, average 4.
  • Current disk queue length: hovers around 1

SQL Server's profiler sees hundreds of thousands of queries every minute. CPU usage on the clients is pretty much zero, indicating it's waiting on server-side queries to execute. I have put this workload through the DB Engine Tuning Advisor and applied its suggestions to a test backup, but this hasn't really made much difference.

By the way, we have a mix of 100Mbit and gigabit Ethernet, all on one subnet, with 40-ish users across two floors.

To the question.

As I see it we have two choices to resolve/improve this situation.

  • We can scrap it and replace it with an entirely new CRM system, either bespoke or part bespoke
  • We can extend the life of this system by chucking hardware at it.

We can build an Intel i7 system with crazy performance numbers for an order of magnitude less cost than replacing the software.

When a new system is eventually developed, it can be hosted on this box, so there's no wasted hardware. A new CRM system keeps getting put off, and off, and off - I don't see that happening for at least a year.

Any thoughts on this situation, especially if you've been here yourself, would be most appreciated.

Thanks


Source: (StackOverflow)

Where did "Wait 30 seconds before turning it back on" come from?

I guess these are the kinds of things I think about on the weekend...

When I was growing up (not that long ago) my parents always taught us to wait 30 seconds after shutting down the computer before turning it back on again.

Fast forward to today in professional IT, and I know a good number of people that still do the same.

Where did the "30 second" rule come from? Has anyone out there actually caused damage to a machine by powering it off and on within a few seconds?


Source: (StackOverflow)

HP plan to restrict access to ProLiant server firmware - consequences?

I've been a longtime advocate for HP ProLiant servers in my system environments. The platform has been the basis of my infrastructure designs across several industries for the past 12 years.

The main selling points of ProLiant hardware have been long-lasting product lines with predictable component options, easy-to-navigate product specifications (Quickspecs), robust support channels and an aggressive firmware release/update schedule for the duration of a product's lifecycle.

This benefits the use of HP gear in primary and secondary markets. Used and late-model equipment can be given a new life with additional parts or through swapping/upgrading as component costs decline.

One of the unique attributes of HP firmware is the tendency to introduce new functionality along with bugfixes in firmware releases. I've seen Smart Array RAID controllers gain new capabilities, server platforms acquire support for newer operating systems, serious performance issues resolved; all through firmware releases. Reading through a typical changelog history reveals how much testing and effort goes into creating a stable hardware platform. I appreciate that and have purchased accordingly.

Other manufacturers seem to ship product as-is and only focus on correcting bugs in subsequent firmware releases. I rarely run firmware updates on Supermicro and Dell gear. But I deem it irresponsible to deploy HP servers without an initial firmware maintenance pass.


Given this, the early reports of an upcoming policy change by HP regarding server firmware access were alarming...

The official breakdown:


Access to select server firmware updates and SPP for HP ProLiant Servers will require entitlement and will only be available to HP customers with an active contractual support agreement, HP Care Pack service, or warranty linked to their HP Support Center User ID. As always, customers must have a contract or warranty for the specific product being updated.

Essentially, you must have active warranty and support on your servers in order to access firmware downloads (and presumably, the HP Service Pack for ProLiant DVD).

This will impact independent IT technicians, internal IT departments, and customers running older equipment the most, followed by people seeking deals on used HP equipment. I've provided many Server Fault answers that boil down to "updating this component's firmware will solve your problem". The recipients of that advice likely would not have active support and would be ineligible for firmware downloads under this policy.

  • Is this part of a growing trend of vendor lock-in? HP ProLiant Gen8 disk compatibility was a precursor.
  • Is HP overstepping bounds by restricting access to updates that some people have depended upon?
  • Will the result be something like the underground market for Cisco IOS downloads?
  • How does this sit with you, your organization or purchase decision makers? Will it impact future hardware decisions?
  • Is this any incentive to keep more systems under official warranty or extend Care Packs on older equipment?
  • What are other possible ill-effects of this policy change that I may not have accounted for?

Update:
A response on the HP Support Services Blog - Customers for Life

Update:

This is in effect now. I'm seeing the prompt when trying to download BIOS updates for my systems. A login using the HP Passport is now necessary to proceed with the download.

[screenshot: the HP Support Center sign-in prompt shown before the firmware download]


Source: (StackOverflow)

Reliability of SSD drives

The main advantage of SSD drives is better performance. I am interested in their reliability.

Are SSD drives more reliable than normal hard drives? Some people say they must be, because they have no moving parts, but I am concerned that this is a new technology that may not be completely mature yet.


Source: (StackOverflow)