EzDev.org

Top 15 VPS interview questions


Linode Distro (How to Choose?) 64bit? [closed]

I have made the leap to Linode (360MB) and wanted to get some feedback on which distribution to choose. I'm going to be running LAMP (with P being PHP).

I am mainly curious about security, performance, stability and future patching.
Should I go with a 64bit version of the OS or are there drawbacks with that?

Choices ...

Arch Linux 2009.02
Arch Linux 2009.02 64bit
CentOS 5.3
CentOS 5.3 64bit
Debian 5.0
Debian 5.0 64bit
Fedora 11
Gentoo 2008.0
Gentoo 2008.0 64bit
OpenSUSE 11.0
Slackware 12.2
Ubuntu 8.04 LTS
Ubuntu 8.04 LTS 64bit
Ubuntu 9.10
Ubuntu 9.10 64bit

Cheers


Source: (StackOverflow)

Optimizing Apache and MySQL on Linux Xen VPS

I have a Xen virtual private server (VPS) running Ubuntu 8.10, with 128 MB of RAM.

I've found several "how to optimize Apache and MySQL for low-memory VPS" pages via Google, but they provide contradictory information. So I'm asking Server Fault: how does one optimize Apache and MySQL for a low-memory VPS configuration?


A couple of people have suggested using nginx instead of Apache. I'll look into that, but I'd prefer to stick with Apache if possible, just to avoid having to learn all about configuring application stacks on top of an unfamiliar (to me) web server.
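For what it's worth, the "contradictory" guides tend to disagree on the numbers rather than the shape: on 128 MB the goal is simply to stop Apache and MySQL from over-allocating. A minimal sketch for Apache 2.2's prefork MPM, with purely illustrative values (measure your own process sizes before adopting any of these):

```apacheconf
# apache2.conf (prefork MPM) -- illustrative low-memory caps
<IfModule mpm_prefork_module>
    StartServers          1
    MinSpareServers       1
    MaxSpareServers       3
    MaxClients            5
    MaxRequestsPerChild 500
</IfModule>
KeepAlive Off
```

and the my.cnf counterpart (again illustrative; skip-innodb only if everything is MyISAM):

```ini
# my.cnf -- illustrative low-memory settings
[mysqld]
key_buffer_size  = 8M
query_cache_size = 4M
max_connections  = 10
skip-innodb
```

nginx or lighttpd would stretch the 128 MB further, but the same "cap concurrency, shrink buffers" principle applies whichever web server you stay with.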


Source: (StackOverflow)

Automating server deployment

I find I am constantly setting up nearly identical servers and VPSs for a number of my clients, and it can be very time consuming. Often the only thing that changes between deployments is the website to be served. Is there an easy way to automate all this and take the monotony out of setting up 56 identical servers?

The servers I have deployed so far have all been Ubuntu, but I may start to use other Linux OSs or even Windows. So far I have looked at Capistrano, but it seems to be focused on writing little Ruby programs to do the job, and I have no knowledge of Ruby at all.
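For the simple "identical servers" case, even a plain shell bootstrap script captures most of the win before reaching for a configuration-management tool like Puppet or cfengine. This is a hypothetical sketch, not a real provisioning tool: the site name is the only per-deployment variable, and the real apt-get step is left as a comment so the sketch stays side-effect free:

```shell
#!/bin/sh
# Hypothetical bootstrap sketch for an Ubuntu target. The common stack is
# identical everywhere; only the site name varies per client.
bootstrap() {
    site="$1"
    # On a real host this would be:
    #   apt-get update && apt-get install -y apache2 mysql-server php5
    echo "installing stack"
    # Placeholder for dropping in a vhost config for this client's site.
    echo "configuring vhost for $site"
}

# Example: provision one client site
bootstrap client1.example.org
```

Keeping the script in version control means a new server is one scp and one run away, regardless of which client it is for.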


Source: (StackOverflow)

Why is my bare-metal machine with 16x 2.93GHz cores performing worse than a VPS with 4x 2.5GHz cores?

I have written a piece of multi-threaded software that runs a bunch of simulations every day. This is a very CPU-intensive task, and I have been running the program on cloud services, usually on configurations of about 1GB per core.

I am running CentOS 6.7, and /proc/cpuinfo tells me my four VPS cores run at 2.5GHz.

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping        : 2
microcode       : 1
cpu MHz         : 2499.992
cache size      : 30720 KB
physical id     : 3
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 3
initial apicid  : 3
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good unfair_spinlock pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm arat xsaveopt fsgsbase bmi1 avx2 smep bmi2 erms invpcid
bogomips        : 4999.98
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

With the rise in exchange rates my VPS became more expensive, and I came across a "great deal" on used bare-metal servers.

I purchased four HP DL580 G5 servers, each with four Intel Xeon X7350s. Basically, each machine has 16x 2.93GHz cores and 16GB of RAM, to keep the configuration similar to my VPS cloud.

processor       : 15
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU           X7350 @ 2.93GHz
stepping        : 11
microcode       : 187
cpu MHz         : 1600.002
cache size      : 4096 KB
physical id     : 6
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 27
initial apicid  : 27
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca lahf_lm dts tpr_shadow vnmi flexpriority
bogomips        : 5866.96
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

Essentially it seemed like a great deal, as I could stop using VPSs for these batch jobs. Now for the weird part...

  1. On the VPSs I have been running 1.25 threads per core, just as I do on the bare metal. (The extra 0.25 thread is to compensate for idle time caused by network use.)
  2. On my VPSs, using 44x 2.5GHz cores in total, I get nearly 900 simulations per minute.
  3. On my DL580s, using 64x 2.93GHz cores in total, I am only getting 300 simulations per minute.

I understand the DL580 has an older processor. But if I am running one thread per core, and the bare-metal server has faster cores, why is it performing worse than my VPS?

There is no memory swapping happening on any of the servers.

top says my processors are running at 100%. I get an average load of 18 (5 on the VPS).

Is this just how it is, or am I missing something?

Running lscpu reports 1.6GHz on my bare-metal server, and the same shows up in /proc/cpuinfo.

Is this reading correct, or is it caused by some incorrect power management setting?

[BARE METAL] $ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 15
Stepping:              11
CPU MHz:               1600.002
BogoMIPS:              5984.30
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0-15


[VPS] $ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Stepping:              2
CPU MHz:               2499.992
BogoMIPS:              4999.98
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0-3
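The 1.6GHz reading above is consistent with frequency scaling (SpeedStep driven by an on-demand or powersave governor) clocking cores down. A quick check from the shell, assuming the kernel exposes the cpufreq sysfs interface (it may simply be absent on some builds):

```shell
# Report the active scaling governor and current frequency, if exposed.
gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov" ]; then
    echo "governor: $(cat "$gov")"
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
else
    echo "no cpufreq interface exposed"
fi
```

If the governor is ondemand or powersave, switching to performance (e.g. `cpupower frequency-set -g performance`, or `cpufreq-set` on older distros) is worth testing, and BIOS power-saving modes on the DL580 are worth reviewing too. That said, even at full clock a 2007-era X7350 does substantially less work per cycle than a Haswell-generation E5-2680 v3, so clock speed alone will not close the whole gap.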

Source: (StackOverflow)

Uses for a small Virtual Private Server?

I rented a small VPS (~130MB of RAM) to run an IRC bot. The bot is no longer needed, so I have a spare VPS until the billing period ends.

I also have shared web hosting, but can anyone think of things a VPS might be useful for that can't be done on shared hosting?

I'm a developer for both web and desktop apps.

Suggestions for larger VPSs are also welcome.


Source: (StackOverflow)

Are vCPU the same as 1 Socket, or a single Core?

Currently I have a Hyper-V VPS with 2 vCPUs. I would like to install SQL Server Express 2012, which has a limitation of 1 socket or 4 cores, whichever is lesser.

My question: are vCPUs counted as cores, as sockets, or as something completely different?


Source: (StackOverflow)

Who is your favorite VPS Provider? [closed]

Who is your favorite virtual hosting provider? I'm looking for your thoughts on SliceHost, Dreamhost VPS, Linode, 1and1 VPS, etc., and why you like the provider you named.

Thanks!


Source: (StackOverflow)

Linux says my disk is full with 2.4/50 GB used

Today I ran across a problem, and I'm not sure whether it's a misconfiguration by my hosting provider, because I haven't changed anything about the file system.

df -h says:

df -h
Filesystem Size Used Avail Use%  Mounted on
/dev/simfs 50G  2.4G  0    100%  /

It says it's 100% used, but only 2.4G of the 50G are actually in use. I've tried deleting some large logfiles, but it didn't help.

I've also checked with "du -sh *" whether anything is big, but couldn't find anything large.

Does anyone have an idea?

//edit: There are enough inodes free.

df -hi
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/simfs        25M  137K   25M    1% /

//edit: Complete output:

df -T
Filesystem     Type     1K-blocks    Used Available Use% Mounted on
/dev/simfs     simfs     52428800 2127284         0 100% /
none           devtmpfs    262144       4    262140   1% /dev
none           tmpfs        52432      56     52376   1% /run
none           tmpfs         5120       0      5120   0% /run/lock
overflow       tmpfs         1024       0      1024   0% /tmp
none           tmpfs       209700       0    209700   0% /run/shm
none           tmpfs       102400       0    102400   0% /run/user

//edit: Permissions:

ls -la /dev/simfs
brw------- 1 root root 144, 149 Aug 14 00:01 /dev/simfs
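For context: /dev/simfs means this is an OpenVZ container, so the 50G "filesystem" is really a host-side disk quota (vzquota), and a Used/Avail pair that doesn't add up to Size usually means that quota is exhausted or out of sync on the host, which only the provider can repair. From inside the container you can at least look for hit resource limits, assuming the beancounter interface is exposed:

```shell
# In an OpenVZ container, per-resource failure counters live here (if exposed).
bc=/proc/user_beancounters
if [ -r "$bc" ]; then
    # The last column is failcnt; non-zero rows mark resources that hit a limit.
    awk 'NR > 2 && $NF > 0' "$bc"
else
    echo "not an OpenVZ container (or interface hidden)"
fi
```

If nothing shows up there, the discrepancy lives on the host side and a support ticket is the practical fix.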

Source: (StackOverflow)

How to save and exit crontab -e?

How to save and exit crontab -e?

I tried every method listed here and none works. I have CentOS 5; vi comes by default, and I installed nano with yum.

Solved

I just changed the default editor:

export EDITOR=nano

and now I can do what I do using nano :) Thanks everyone, and yes, I should learn vi... someday!
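To make that fix survive new shell sessions, export it from your profile (assuming bash); and for the record, in vi you would save and exit with Esc followed by :wq.

```shell
# Persist the editor choice so future `crontab -e` runs use nano too
# (assumes bash as the login shell).
echo 'export EDITOR=nano' >> ~/.bashrc
export EDITOR=nano   # take effect in the current session as well
```

Note that some crontab implementations consult VISUAL before EDITOR, so setting both is the safest bet.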


Source: (StackOverflow)

Optimize apache/php/mysql running on VPS for heavy load

A question about optimizing an Apache/MySQL server on a VPS with 512 MB of RAM. Under normal load everything runs fast, with no connection lag. However, on our heavy-traffic days (50k+ visits) the site crawls, and it takes 30+ seconds to get content back from Apache.

The site is running on ExpressionEngine (a CMS, in PHP) and I've followed their heavy-load optimization guide. I've googled and followed quite a few guides for Apache with some luck, getting it to where it is now, but I need consistent response times.

I assume this is different from the 'optimize for low memory' question on here, as I have enough RAM (for what I'm trying to do); I just need the server not to choke under heavy load.

Any recommendations?
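This symptom (fine normally, multi-second stalls only under load) is classically Apache spawning more children than RAM can hold and pushing the box into swap. A back-of-the-envelope cap on MaxClients, with purely illustrative numbers (measure your own per-child RSS, e.g. with ps, before adopting anything):

```shell
# Rough MaxClients sizing: RAM left over for Apache / average child size.
total_mb=512          # VPS RAM
mysql_and_os_mb=212   # illustrative: resident size of MySQL + everything else
per_child_mb=25       # illustrative: typical mod_php child RSS
max_clients=$(( (total_mb - mysql_and_os_mb) / per_child_mb ))
echo "MaxClients ~ $max_clients"   # prints: MaxClients ~ 12
```

With the cap in place, excess requests queue in the listen backlog instead of swapping the machine to death; pairing it with KeepAlive Off (or a very short KeepAliveTimeout) frees children faster during bursts.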


Source: (StackOverflow)

Benefits to having a sole email VPS server

I have around 5 websites hosted on a VPS, and for some reason I recently bought another VPS just to host the email for those 5 websites. I would like to know the pros and cons of having a separate email server isolated from its web host.

I initially did it to avoid running too much software on one server, so that the likes of Postfix, Dovecot, etc. would not share resources with, and slow down, mysqld, php-fpm, and so on. But since I am a noob, I have no knowledge to back up this assumption.


Source: (StackOverflow)

What is the difference between a cloud server, a virtual server, and a dedicated server?

What exactly is the difference between a VPS (Virtual Private Server), a Cloud Server, and a Dedicated Server? I'm having trouble finding a concise explanation that isn't littered with advertising.


Source: (StackOverflow)

How to protect against loss of server on a budget

I run a small company on not much budget, providing websites and databases for charity and not-for-profit clients.

I have a few Debian Linux VPS servers and ensure I have daily backups to a different VPS than the one the service is hosted on.

Recently one of my hosting companies told me two drives had failed simultaneously, so that data was lost forever. Stuff happens; they said sorry, and what else could they do? But it made me wonder about cost-effective ways to get a VPS up again after a hardware or other host-related failure.

Currently I would have to

  1. Spin up a new VPS
  2. Get the last day's backup (which includes databases, web root and website-specific config) over onto the VPS, and configure it like the last one etc.
  3. Update DNS and wait for it to propagate.

It would probably take a day or so to achieve this, with DNS propagation being the big unknown, although I have the TTL set quite low (an hour or so).

Some hosts provide snapshots which can be used to replicate a setup to a new VPS, but there's still the IP address to deal with, and this doesn't help if the host company cancels or suspends an account outright. (I've been reading about this behaviour from certain hosting providers and it scared me! I'm not doing anything spammy or dodgy, and I keep a close eye on security, but I realise that they literally have the power to do this, and I'm quite risk averse.)

Is this, combined with choosing reputable hosts, the best I can do without going for an incredibly expensive solution?
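One cheap improvement to steps 1 and 2 is to script the restore itself, so a fresh VPS needs only the script plus the latest backup. A sketch with placeholder paths (the real install and unpack commands are left as comments, since they depend on your stack):

```shell
#!/bin/sh
# Hypothetical restore sketch for a fresh Debian VPS; all paths are placeholders.
restore() {
    backup_dir="$1"
    # apt-get update && apt-get install -y apache2 mysql-server   # base stack
    # tar xzf "$backup_dir/webroot.tar.gz" -C /var/www            # web root
    # mysql < "$backup_dir/databases.sql"                         # databases
    echo "restored site from $backup_dir"
}

restore /srv/backups/latest
```

Rehearsing the script against a throwaway VPS occasionally is what turns "a day or so" into an hour; the DNS wait then becomes the dominant term, which your low TTL already addresses.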


Source: (StackOverflow)

Why would I need a firewall if my server is well configured?

I admin a handful of cloud-based (VPS) servers for the company I work for.

The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial, or anything like that (i.e., not that interesting).

Clearly, people on here are forever asking about configuring firewalls and the like.

I use a bunch of approaches to secure the servers, for example (but not restricted to)

  • SSH on non-standard ports; no password logins, only known SSH keys from known IPs, etc.
  • HTTPS, and restricted shells (rssh), generally only from known keys/IPs
  • servers are minimal, up to date, and patched regularly
  • tools like rkhunter, cfengine, lynis, denyhosts, etc. for monitoring

I have extensive experience of Unix sysadmin. I'm confident I know what I'm doing in my setups, and I configure /etc files by hand. I have never felt a compelling need to install a firewall (iptables, etc.).

Put aside for a moment the issues of physical security of the VPS.

Question: I can't decide whether I am being naive, or whether the incremental protection a firewall might offer is worth the effort of learning and installing it, plus the additional complexity (packages, config files, possible support, etc.) on the servers.

To date (touch wood) I've never had any problems with security but I am not complacent about it either.
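For what it's worth, the usual argument for a host firewall even on a tight setup is that a default-deny inbound policy fails closed: a service that is later misconfigured, or a new daemon that quietly binds a port, stays unreachable from outside. A minimal illustrative ruleset (the ports are assumptions; it prints the commands rather than applying them, so they can be reviewed and then piped to a root shell):

```shell
# Default-deny inbound policy, emitted for review rather than applied.
ssh_port=22   # adjust if sshd listens on a non-standard port
make_rules() {
    echo "iptables -P INPUT DROP"
    echo "iptables -A INPUT -i lo -j ACCEPT"
    echo "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT"
    echo "iptables -A INPUT -p tcp --dport $ssh_port -j ACCEPT"
    echo "iptables -A INPUT -p tcp --dport 443 -j ACCEPT"
}
make_rules
```

Whether that incremental protection justifies the extra moving parts is exactly the judgment call in the question; once written, though, the ruleset is only a handful of lines of config to maintain.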


Source: (StackOverflow)

Why is the chroot_local_user of vsftpd insecure?

I'm setting up vsftpd on my VPS, and I don't want users to be allowed to leave their FTP home directory. I'm using local user FTP, not anonymous, so I added:

chroot_local_user=YES

I've read in a lot of forum posts that this is insecure.

  1. Why is this insecure?
  2. If this is insecure because these users can also SSH into my VPS, then I could just lock them out of sshd, right?
  3. Is there another option for achieving this behaviour in vsftpd? (I don't want to remove world read permissions on all folders/files on my system.)
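On question 1, the commonly cited problem is a chroot whose root directory is writable by the jailed user: combined with upload rights it can be leveraged to break out of the jail, which is why newer vsftpd versions refuse such sessions outright. A sketch of the usual mitigation (paths are illustrative):

```ini
# vsftpd.conf -- keep the jail, but make its root read-only
chroot_local_user=YES
# The user then writes only inside a subdirectory, prepared roughly like:
#   chmod a-w /home/someuser
#   mkdir /home/someuser/files && chown someuser /home/someuser/files
# Newer vsftpd otherwise refuses the session unless you set
# allow_writeable_chroot=YES, which reopens the hole.
```

On question 2, giving those accounts a non-login shell such as /sbin/nologin keeps them out of sshd without touching permissions; depending on PAM's pam_shells check, the replacement shell may need to be listed in /etc/shells for FTP logins to keep working.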

Source: (StackOverflow)