SteamOS on Lenovo T510 with Nvidia Optimus

February 13, 2015

SteamOS is still in beta, but releases keep coming out and I found the concept interesting.  You need a PC with UEFI and a newer Nvidia or ATI card (although I think Intel graphics may work).  SteamOS is basically meant to run a home theater PC in your living room and/or act as a gaming console with a controller (an Xbox controller can work here if you have the right model/driver).  I was more curious than anything, but it wasn’t easy to get this running on the T510.  This post is just to share what it took and to bookmark a place for me to come back to (since I have other purposes for this computer, namely OpenStack testing with Mirantis).

The Lenovo T510 I used has an Nvidia NVS 3100M card and an i5 processor.  Lenovo, like many other manufacturers, did not add UEFI until 2012 (meaning it is in the T520 but not the T510).  Furthermore, the Nvidia Linux driver dropped support for many older cards (like the 3100M) after the 340.x driver branch (I think it’s on 343 right now?).  Steam builds the driver in its Debian-based installer, and if dmesg shows an NVIDIA device, it assumes you want the newest driver.  The problem is that this breaks automation: the installer prompts about an unsupported card, so the automated install always fails.  In any case, here’s how you do this.
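
Before kicking off the install, a quick sanity check from any Linux live environment confirms which GPU the installer is going to detect and which driver currently owns it. These are just the commands I’d run myself; they aren’t part of the SteamOS installer.

# List the graphics adapters the installer will see
lspci | grep -Ei 'vga|3d'
# Check whether nouveau or the proprietary nvidia module is currently loaded
lsmod | grep -Ei 'nouveau|nvidia'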

WARNING: This wipes out all data on the drive!!!

  1. Get the ISO version of the install; you can build it yourself or get a precompiled copy.  It’s just easier to make a DVD here (because USB boot is based on UEFI and you have to fake the CD-ROM being the USB drive…trust me, just burn a DVD for this).
    1. Get the ISO by going to the Steam forum and checking for a new sticky on a release, then burn a DVD – http://steamcommunity.com/groups/steamuniverse/discussions/1/
  2. Boot the laptop from the DVD and let it install (you can select expert mode).  Midway through the core install it will error out with a message about retrying five times.  Just click back until it gives you the option to continue, which puts you back on a menu screen.  At this point, jump to tty2 by pressing CTRL-ALT-F2 and run the following commands:
    1. chroot /target /bin/bash
    2. apt-get install -f
      1. If this pulls in the NVIDIA installer, skip the next step
    3. apt-get install nvidia-*
      1. When the installer asks whether to install for an unsupported card, say yes; two more installer prompts will follow, so select the obvious choices
  3. Hit CTRL-ALT-F5 (I think) to go back to the installer and select "install core components" (or whatever selection it’s already on). It will warn about a dirty install; say it’s OK and wait. It should complete the installation (it may give a warning, but that’s fine)
  4. It should boot into a GNOME desktop where you need to enable the network; it will then update Steam and reboot.
  5. It will boot to the GRUB menu and should back up the system partition (select this if it doesn’t happen automatically). On the next reboot, hit ESC after the Lenovo screen (be quick) to get the GRUB menu to show.  Select the recovery mode option and it should boot to a command line.
  6. At boot, log in with desktop/desktop; we will be following most of this guide, but we need networking enabled first and it doesn’t come up correctly in recovery mode.
    1. Log in with desktop/desktop if you’re not already
      1. You may need to set the password using the “passwd” command after login (set it to: desktop)
      2. Change the display manager to GDM3 by running the following:
        1. sudo dpkg-reconfigure lightdm
          1. select gdm3
        2. sudo passwd steam
          1. set it to “steam”
        3. sudo reboot
    2. Hit ESC after the Lenovo splash screen; keep trying until you see the legacy SteamOS graphical backdrop, and the GRUB menu should appear after a few seconds
      1. Select the normal boot option this time and let it boot into the GNOME desktop
      2. Once you’re in (you may have to select STEAMOS at the login screen), hit CTRL-ALT-F2 for the command line
      3. Log on with desktop/desktop and proceed to the next step (installing the compilers and headers)
    3. Install the compilers and kernel headers
      1. sudo apt-get install build-essential linux-headers-$(uname -r)
      2. wget http://us.download.nvidia.com/XFree86/Linux-x86_64/340.76/NVIDIA-Linux-x86_64-340.76.run
      3. sudo chmod +x NVIDIA-Linux-x86_64-340.76.run
      4. sudo apt-get --purge remove xserver-xorg-video-nouveau nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-smi
      5. sudo apt-get remove --purge nvidia-*
      6. sudo nano /etc/modprobe.d/disable-nouveau.conf
        1. # Disable nouveau
          blacklist nouveau
          options nouveau modeset=0
        2. add the lines above and save this new file (CTRL-X then Y then ENTER)
      7. sudo dpkg-reconfigure lightdm
        1. select lightdm
      8. sudo reboot
    4. Hit ESC after the Lenovo splash screen (we’re going back to recovery) and select the recovery option in GRUB (it should drop you to a bash prompt already)
      1. cd /home/desktop (I think, might be steamos)
      2. ls -la (see if the NVIDIA file is there, if not, find it)
      3. sudo /etc/init.d/lightdm stop
      4. sudo /etc/init.d/gdm3 stop
      5. sudo ./NVIDIA-Linux-x86_64-340.76.run
        1. Accept the EULA, say YES to DKMS, YES to 32-bit compatibility, YES to the Xorg config, and click OK
      6. sudo reboot
    5. Boot to normal mode and wait (you may see nothing but a big cursor for over 10 minutes).  I had dinner, came back, and it was up and running.  A couple of quick verification commands follow below.
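
If you want to confirm the legacy driver actually took (this is my own habit, not part of the official steps), two quick checks from a terminal will tell you:

# The proprietary module should now be loaded instead of nouveau
lsmod | grep nvidia
# Should report the 340.76 legacy driver
cat /proc/driver/nvidia/version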

Hope this works for some of you who want to test it!

This problem has been raised, so hopefully it will be addressed in the final release:

https://github.com/ValveSoftware/SteamOS/issues/163

http://paste.ubuntu.com/7972356/

PS – I noticed a few other interesting places discussing this which I haven’t tried

1) http://steamcommunity.com/groups/steamuniverse/discussions/1/648814395823405268/

2) GitHub for non-UEFI boot (no legacy nvidia yet) – https://github.com/directhex/steamos-installer

Categories: Hardware, Linux

Lightbulb moment with Docker

February 5, 2015

I’d heard the word, played with a Docker 101, and was still left wondering why this technology is important.  After all, we have VMs we can run applications on, and they obviously support a lot more apps than Docker does.  Something drew me back, though, to actually spin up my own machine with a real Docker Engine and get my hands dirty.

Just a recap to explain the difference between containers and VMs.  VMs are full guest OSes running their own libraries, executables, and so on.  They should be a pretty familiar concept to most people at this point.  You can use orchestration and provisioning to stand up the OS, hypervisor, guest OS, and applications.  Normally, many of us on the Tech Ops side of the house are used to doing it the old-fashioned way; we have our shortcuts, but we’re still touching or tweaking a few things.  Those of us on larger deployments lean much more toward automation, but it’s often overkill for smaller deployments. Let’s place ourselves in the automated camp and say we have templates for the OS, we can provision on bare metal, and we can even push and configure applications.

Containers, on the other hand, sit on top of an engine (Docker, in this case): you have a base OS and then Docker.  Docker isn’t a hypervisor; it doesn’t virtualize hardware and present it to a guest the way a hypervisor does.  It takes applications (and an OS userland) and lets them share resources (mainly libraries and binaries) with some containers, with all of them, or with none (fully isolated).  The biggest thing to keep in mind here is persistence.  When Docker updates a container, it doesn’t apply a patch or an update; it rebuilds the entire container.  Sure, you can modify things inside a container, but that isn’t Docker, that’s you, and when you want to update or improve the image, you’ll lose those changes.  What you ought to do is use the Dockerfile to add the applications and components you need.
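
As a rough illustration (the image, package, and names below are placeholders, not from any particular project), everything the container needs is declared up front, and the image is rebuilt from scratch the same way every time:

# Write a minimal, hypothetical Dockerfile (base image and package are just examples)
cat > Dockerfile <<'EOF'
FROM debian:wheezy
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF
# Rebuild the image and run it; the result is identical anywhere the engine runs
docker build -t mysite .
docker run -d -p 80:80 --name web mysite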

The first thought I had was: how the heck are you supposed to use a database, then?  You can’t really afford to wipe that data. The answer (and pardon the loose terminology) is mapping a volume on the host into the container (can the host see it? Good, you can have a container hit it). Hope that makes sense.  Even keeping this in mind, I still struggled to see why Docker matters.
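
For example (the paths and image name here are hypothetical), a host directory can be mounted into the container so the database files outlive any container rebuild:

# Hypothetical layout: the data lives in /srv/pgdata on the host; the container only borrows it
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres:9.3
# Destroy and recreate the container; the data directory is untouched
docker rm -f db
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres:9.3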

In a past life, I spent over a year on a development team (I had little business being there), and TechOps didn’t like me because I kept thinking like a developer.  So before DevOps was coined, I was a TechOps person planted in Development, and it was great.  I learned a lot by managing deployments and watching the SVN and code-push process.

Now, developers usually check in code they develop on their own machines.  That code is supposed to be merged and then run in a test environment (if one exists), and then it’s deployed to production (I’m simplifying this).  Most modern Agile development methods involve streamlining code deployment (so developer 1 isn’t waiting for developer 2 to finish before they can work).  The code should also be complete, with automated testing in place.

I know the big problem, coming from a DevOps angle: it’s the environment.  While some shops can afford multiple copies of Prod, something is always missing.  Let’s say we isolate DEV and copy it from PROD.  It’s stale the second you’re done, it often goes unused, testing differs from reality, and eventually it’s flushed down the toilet and rebuilt.  Automation can help with this; cloning environments and data is great.  Technology can’t fix people issues, though.

With this in mind, Docker clicked for me.  The issue with most environments is the tweaks, or the shadow IT that adds things after a deployment.  Every time I heard the word “done,” I’d ask again and usually find out the person meant 90-99.9% done, which is not done; only 100% is done.  Docker solves this because you deploy on the same engine, and if you changed something, you’re caught: it’s rebuilt EVERY TIME.  This is a GOOD thing.  It means Dev, QA, Preprod, and Prod are the same, and the infrastructure doesn’t get stale because there isn’t much to manage.

Developers like this because the infrastructure doesn’t change; ops should like it because it takes them out of the blame game.  I think most won’t like giving up control, but if you think hard about it, they aren’t giving anything up; there is nothing left to control.  You can back up the container data and have it persist, and just as easily clone and restore it if needed.
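
Since the state lives in a host directory rather than in the container (continuing the hypothetical bind mount from above), backup and restore is just ordinary file handling:

# (hypothetical paths) Stop the container, back up the host-side data directory, start it again
docker stop db
tar czf /backups/pgdata-$(date +%F).tgz -C /srv pgdata
docker start db
# Restoring elsewhere is just unpacking the archive and starting a fresh container on top of it
tar xzf /backups/pgdata-2015-02-05.tgz -C /srv
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres:9.3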

I have a lot further to go; I’m just a beginner, but I thought I’d share how I’m finding uses for Docker, and I really do see it as the future (no clue when that future arrives).  By the way, Docker runs on VMware, generic Linux, AWS, Google Cloud, and many more.  If the engine is the same, the apps on top of it can be used on whatever platform you have.  Think of the savings in managing all those guest OSes, not to mention the licensing and support!

Granted, not everything runs on Docker, but every day more does.  If I were a vendor, I’d be integrating with Docker before my competitors do.

PS – I didn’t even edit this thing, so please pardon the grammar and run-ons.

Categories: DevOps, Docker

I’ve got problems but 99.999 (five nines) storage isn’t one of them

October 7, 2014

I’ve recently been in front of a few customers discussing various designs for application and desktop virtualization.  Inevitably, at some point, we discuss storage.  When it comes to storage, I often pause and read the room, because most people I know on the VAR and customer side have their favorites, what I’d call a Dallas Cowboys team (I’m an Eagles fan; if you’re a Dallas fan, just reverse the teams, it’ll work).

I’ve architected (is that a real word?) large deployments involving multiple data centers, high availability, and disaster recovery. My focus isn’t on finding the single best technology and gluing things together; it’s on what works (and hopefully, what works well).  Storage can be a very big issue with VDI: traditional SAN-based storage was not designed for desktop workloads, and we’ve grown used to the fast disk speeds and low latency of the drives humming under our wrists as we type.  Moving those workloads to the data center doesn’t always work, and when you add in the latency of a server reaching out to a separate SAN, it compounds the problem.

The traditional SAN isn’t usually the best fit for heavy desktop and application workloads; however, adding flash to the mix often deals with the IOPS issue, and latency can be minimized.  Is flash necessary?  Nope.  I’ve had designs with 15K SAS drives local to the blades work very well, and the Citrix cache-to-memory-with-overflow-to-disk option can perform even better on 10K or 7.2K drives.  However, I often don’t get to position that solution, which brings me back to my first point…everyone has favorites.

I can take almost any storage and find a solution.  Even a traditional SAN, if I can use memory for caching, I can make work.  Local disk? Easy.  Flash appliances? They’re great!  But there is one thing I keep hearing about that I don’t need: storage providing high availability, or five nines.  There is a simple reason I don’t need five nines, and I cringe when I hear others cite it and lean back.

Your application doesn’t solely rely on storage to be available!

How will five nines prevent downtime when your hypervisor crashes or profile corruption occurs?  What about a failed backup on SQL that just eats up disk space?  What should we do?

We need to embrace failure and assume things fail.  It’s so much cheaper than having the hardware give you a warm fuzzy feeling.  When that business app fails, the business doesn’t care whether it’s storage or a cleaning person tripping over a server cord (I hope that isn’t even possible in most of your environments!). They see IT as the failure, not storage.

I wish I could take credit for this thought process, but Netflix has pretty much perfected it.  If you haven’t heard of Chaos Monkey, you should read up on it – http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html .

Spend enough time in IT and you’ll realize that chaos always wins, and you burn out quickly if you fight it.  However, returning to my original point, the design and architecture can embrace failure too.  When we talk about desktops, many argue persistent versus non-persistent.  Persistent means you keep your own desktop; non-persistent means you can roam (which usually means some flavor of roaming profiles).  I’m a big advocate of non-persistent: your storage or server fails, you get logged off, you log back in, and you’re right where you were (or very close to it).  If the application is database-driven and supports mirroring, you can survive storage failures, if it’s set up correctly.

Going back to storage, this means two of whatever I have: two local drives, two appliances, two SANs.  I’ll take two 95%-uptime appliances over a single 99.999% appliance any time.  I’d rather save costs with single controllers than try to make a single point of failure never fail (because your application never has just a single point of failure; it has multiple points of failure).

I’m not arguing that five nines doesn’t have a place somewhere.  If you can’t use non-persistent, it might be for you.  However, I’d argue that virtualizing your applications and desktops is not a good move if you need persistence anyway.  Just my two cents; feel free to comment if you agree, disagree, or think I’m full of it.  I’m always open to suggestions!

PS – This is a first draft to publish; I’m sure there are some typos and run-on sentences in there.

Categories: Citrix, microsoft, vmware

InfoBlox and Citrix Issues [RESOLVED]

May 15, 2014

I have heard a lot about Infoblox issues with Citrix and had the chance to meet some of the Infoblox team today for lunch and a meeting.  My first question, and that of Kevin Dralle, whom I work with, was about the apparent incompatibility of Infoblox and Citrix, especially with PVS.  Please comment if you find this doesn’t work or has issues.

 

Some of the issues have been described elsewhere (I know Jarian Gibson has written and tweeted a few things on this as well):

http://discussions.citrix.com/topic/307967-dhcp-issues-with-pxe-boot-and-win7-os-streamed/

http://discussions.citrix.com/topic/301193-provisioned-desktops-with-infoblox/

Citrix has a CTX article on Infoblox, but it’s a bit mysterious on the details:

http://support.citrix.com/article/CTX200036

So what is going on with Infoblox?  Any time we’ve had a customer running Infoblox alongside Citrix, we cringe, or we opt for something like a dual-NIC, isolated PVS VLAN (using Microsoft DHCP).  In any case, here is what happens.

Infoblox assigns the device a UID based on the MAC address but also on some of the device’s characteristics. When we boot off PVS, we bring up the bootstrap (.bin) file, which acts as the OS at the time of PXE boot.  The MAC is static, but once the bootstrap pulls down the image and the streamed Windows OS comes up, the UID changes, so Infoblox assumes the device needs another IP address.  There are obviously use cases for this behavior, but for PVS it’s an issue, because you end up with two IP addresses.

One fix has been to use reservations, but this defeats the whole purpose of using an appliance to manage all of this.  Furthermore, when (or if) you get into automation and orchestration, you’ve got one more component to worry about as the scope grows.

The fix is a DHCP setting on the Infoblox side that keys leases off the MAC address rather than the changing UID.  You do need to be on the 6.6 or higher release for this option, but it is worth it if you have this issue, or if you are an Infoblox shop and want to roll out PVS without resorting to something uncommon in deployments (using BDM has a lot less collateral out there than PXE boot does).

Below are the two areas, at the Grid and Member levels, where you can set this (courtesy of Infoblox!).

[Screenshots: the Grid-level and Member-level DHCP settings]

Categories: Citrix

Citrix SynergyTV – SYN119 – How Atlanta Public Schools delivers virtual desktops to 50,000 students #citrixsynergy

Categories: Citrix, XenDesktop

Redirecting Folders to Office 365

May 11, 2014

I created a script a while ago to map folders at logon, enabling users to save directly to OneDrive for Business (also known as SharePoint Online or SkyDrive Pro). I plan to explain more about this later, but it is the script I mentioned in my #SYN119 presentation at Citrix Synergy 2014.

The script is now published on CodePlex under the GPLv3 license (a copyleft license). Feel free to use it, and to modify it if you can help. Below is a short description of what the script does and why it was created.

https://office365drivemap.codeplex.com/

This project enables the use of Office 365 as redirected folders in Windows. Specifically, the script and method can be used on a Windows 7 (or higher) desktop with Citrix and roaming profiles (or any persistent profile method). What makes it unique is that no local storage is used (unless you can’t connect to Office 365, and even then only temporarily).

This script was developed by Tyler Bithell and Tom Gamull for a customer implementation. The customer desired a method to eliminate the use of local or shared storage and leverage their Microsoft Office 365 subscription.

You must have a subscription that includes SharePoint Online (Groove, SkyDrive Pro, or OneDrive for Business). This is NOT the same as SkyDrive or OneDrive (those are just consumer online storage, like Dropbox).

We leverage WebDAV drive mapping using NET USE. Although Microsoft Office applications can use OneDrive for Business directly, non-integrated applications can only reach it by navigating to the folder. This is often a problem for task workers, students, and others used to saving to My Documents or Downloads, so the script was created to deal with that issue.

The script was announced but not shown at the Citrix Synergy 2014 conference in #SYN119. This script will also be discussed in the Cisco Live session Tom Gamull is presenting on Atlanta Public Schools.

XenDesktop 7.1 SQL Mirroring

March 6, 2014

Mirroring in SQL is a great way to protect your XenDesktop infrastructure. In 7.1 deployments this can be a bit challenging, since the Citrix documentation doesn’t accurately reflect how to accomplish it.

First, let’s go over some basics. SQL mirroring is a three-server setup: a primary SQL server running the database, a secondary SQL server also running the database, and a third SQL witness server which does NOT run the database (it runs SQL, just no data). If you use local disk, this is an excellent setup. If you have two storage appliances, this is a great setup. If you have one big SAN, this doesn’t make much sense. To make mirroring worthwhile, you need three SEPARATE storage locations, one for each server. If you have two servers on the same storage, mirroring will not provide much value (other than a learning opportunity). I feel this is where people can easily forget why you mirror.

To demonstrate why, let’s say I have a two-node management cluster: one host runs my primary SQL server and the other runs my secondary and the witness. I put the primary on the local disk of HOST1, and the secondary and witness (which is lightweight) on the local disk of HOST2. I have an issue. Say HOST1 goes down; HOST2 is up and SQL stays up just fine, because we have one of the mirrored servers running PLUS the witness. Say I accidentally shut down the witness. No problem. Now say I shut down HOST2 or do maintenance on it. I’ve got a problem. When the primary SQL server can’t see the witness AND the secondary SQL server, it stops. This is by design: it doesn’t know whether it has been orphaned, so it assumes the other two servers may still see each other and be serving the data. If I simply put the witness on a third, unique host and storage area, my mirror is looking great! If I only have two hosts or shared storage, that’s where a cluster makes sense instead. Clusters cannot survive storage failures, but mirrors can; the tradeoff is that you are writing to both storage locations at the same time. Often I’ll use two unique SANs and put the witness on local disk. I can now survive a storage appliance failure; the arrangement is only as good as having three independent points of failure instead of one.

[Diagram: SQL mirror with primary, secondary, and witness servers]

With that said, my main topic is how to get this done on your XenDesktop 7.1 controllers. This appears to have been posted in other places, but since I had to do it, I thought I’d share as well. There is an excellent post at Citrix on this here.

I did this manually below, but I highly recommend downloading his script and giving it a shot before doing this by hand.

I want to add one caveat: you MUST create the machine account logins in SQL on the mirror, so you’ll need to do this after a forced failover to the mirror. In addition, you may also need to delete the machine accounts and add them back if you migrate the SQL database, say from SQL Express to Standard/Enterprise. This is what worked for me; hopefully it helps.

Now for this next part, I like to work one controller at a time. If you mess up, you can’t easily go back and fix things once your controllers are orphaned. This is the same reason you only partially update the farm during upgrades, in case you need to roll back.


$cs = "Server=YOUR_SQL_SERVER_NAME;Initial Catalog=YOUR_SQL_DATABASE_NAME;Integrated Security=True"


Set-LogSite -State Disabled
Set-LogDBConnection -DataStore Logging -DBConnection $null
Set-MonitorDBConnection -DataStore Monitor -DBConnection $null


Set-MonitorDBConnection -DBConnection $null
Set-AcctDBConnection -DBConnection $null
Set-ProvDBConnection -DBConnection $null
Set-BrokerDBConnection -DBConnection $null
Set-EnvTestDBConnection -DBConnection $null
Set-SfDBConnection -DBConnection $null
Set-HypDBConnection -DBConnection $null
Set-ConfigDBConnection -DBConnection $null -force
Set-LogDBConnection -DBConnection $null -force
Set-AdminDBConnection -DBConnection $null -force

Another way to clear out the controllers is to loop over all of them:


$controllers = Get-BrokerController | %{$_.DNSName}


foreach ($controller in $controllers)
{
Write-Host "Disconnect controller $controller ..."


Set-ConfigDBConnection -DBConnection $null -AdminAddress $Controller
Set-AcctDBConnection -DBConnection $null -AdminAddress $Controller
Set-HypDBConnection -DBConnection $null -AdminAddress $Controller
Set-ProvDBConnection -DBConnection $null -AdminAddress $Controller
Set-BrokerDBConnection -DBConnection $null -AdminAddress $Controller
Set-EnvTestDBConnection -DBConnection $null -AdminAddress $Controller
Set-SfDBConnection -DBConnection $null -AdminAddress $Controller
Set-MonitorDBConnection -Datastore Monitor -DBConnection $null -AdminAddress $Controller
reset-MonitorDataStore -DataStore Monitor
Set-MonitorDBConnection -DBConnection $null -AdminAddress $Controller
Set-LogDBConnection -DataStore Logging -DBConnection $null -AdminAddress $Controller
reset-LogDataStore -DataStore Logging
Set-LogDBConnection -DBConnection $null -AdminAddress $Controller
Set-AdminDBConnection -DBConnection $null -AdminAddress $Controller
}

If the last two don’t work, try adding -Force on the end. If they still fail, do the following (you may need to reboot):
Get-Service Citrix* | Stop-Service -Force
Get-Service Citrix* | Start-Service

OK, now it’s time to set up the mirror. Once you’re done setting up the mirror, set the database connection on ONE of the servers only and verify it before moving to the next one(s).

You already set the $cs variable, but if you opened a new window or lost it, set it again:


$cs = "Server=YOUR_SQL_SERVER_NAME;Initial Catalog=YOUR_SQL_DATABASE_NAME;Integrated Security=True"
set-ConfigDBconnection -dbconnection $cs
set-AdminDBconnection -dbconnection $cs
set-LogDBconnection -dbconnection $cs
set-AcctDBconnection -dbconnection $cs
set-BrokerDBconnection -dbconnection $cs
set-EnvTestDBconnection -dbconnection $cs
set-HypDBconnection -dbconnection $cs
set-MonitorDBconnection -dbconnection $cs
set-ProvDBconnection -dbconnection $cs
set-SfDBconnection -dbconnection $cs
Set-LogDbConnection -DataStore logging -DbConnection $cs
Set-MonitorDbConnection -DataStore monitor -DbConnection $cs


Set-LogSite -State Enabled


$testString = Get-BrokerDBConnection
Test-BrokerDBConnection $testString | fl

Now make sure you TEST FAILOVER before declaring success.

Categories: Citrix, SQL, XenDesktop