
In web design, managing user attention is incredibly important. Users typically have very short attention spans and aren’t interested in reading large amounts of text. So when creating text for your website, it’s important to maintain a visual hierarchy and use text effectively. Heavy-weight or large text draws the user’s attention to the most important parts of the site, while progressively smaller text establishes a visual hierarchy of importance for the eye to follow. The most common place for big text to appear is in the header, because the header is also the first thing the user sees. In this post, we’ll look at some examples of big header text in web design: Evolve Pollen Surprisingly Richard Hill Black History Visionare Elespacio The Cut Mextures Festival Intercult Related Posts Powerful Examples of Volcano Photography Fun Text Effect Tutorials You Have to Try Photoshop Typography and Text Effect Tutorials Useful Infographics Templates to Download for Free Awesome Typography Graphic Designs
Stockvault Blog . Blog . Aug 26, 2015 11:02am
Hey guys, today’s set of textures is vivid, green and fresh. I shot these various plants with my 100mm macro lens, and the color and depth are great. Hopefully you can find good use for them. Enjoy! Download all textures as ZIP from copy.com (58Mb) Did you like these textures? Let us know by leaving a comment, and you can even post a link if you used them in your artwork. Related Posts Free Texture Friday - Raw Red Meat Free Texture Friday - Seaweed Free Texture Friday - Wooden Chips Free Texture Friday - Dark Grunge Free Texture Friday - Vintage Paper 8
Stockvault Blog . Blog . Aug 26, 2015 11:02am
Plastic is one of the most important materials on the planet, as it’s widely used in everything from cutting-edge technology to $0.99 toys at your local toy store. Because plastic is not broken down by water, it is far superior for any application where the object may come into contact with moisture, and plastic is also one of the cheapest materials to make. The wide variety of uses for plastic means there are tons of different objects out there to choose from when looking to get creative, and they can be molded, shaped or placed into a variety of forms, making it great for artists and photographers. Photographers especially love being able to use plastic objects as props and other items to include in photographic scenes. Here are some creative and fun examples: Follow Me By wilianto Colourful Plastic By Ray Huntley Panic in the garden! By Luis Bonito A Picture Speaks a Thousand Words By Richard Bland Drinking Straws By Francesco Martini Men at work ! By ARCHIE MCKINNON Play Ground By Kent Mathiesen Plastic dragons By Robert D. Kusztos Toys of childhood By Sam Dobson Plastic Parade By Sonja Schick Hangers By Mustafa Tiryakioglu Despicable Me By Mustafa Celebi Straw land By Dreamscape "Hey you! make me laugh" By Julien Taillez listen all y’all, it’s a sabotage! By ionut jay Color of life By Polina Rabtseva 47 By Julien Taillez Lara By Julien Taillez Abandoned chair By James Billings Plastic curve By Jérôme Le Dorze Inside By Heidi Westum Watch out, it’s gonna….. By Dave Flynn Cubetas By Carlos Duran Soccer Game By Jesus Camacho Buttons By Jamie Folk hlava By Roman Holy Afterparty By Roman Semenov Lady in the crowd By Frank Raebiger dolls By ian mcintosh ..german behavior.. By Martina Epping Related Posts Free Texture Friday - Grunge Plastic Free Texture Friday - Scratched Plastic Creative and Innovative Designs for Bags Blinged Out Metallic Automotive Photographs Free Texture Friday - Subtle Scratches
Stockvault Blog . Blog . Aug 26, 2015 11:01am
Typography is a crucial part of design, but it’s also a type of design all by itself. Art can be created out of, or surrounding, text to create a typographical composition that expresses not only a word but also a concept or idea. Usually, typography graphic designs are built around a single word or a short piece of text, created out of a font, colors, textures or objects in order to enhance or reinforce the idea behind the text. Here are some creative and awesome typography graphic designs: Typography by Rachel-Speed We Dont Work, We Play by Max Kuwertz Steampunk Typography by Alex Beltechi I Feel It In My Bones by Zac Jacobson Steampunk Typography by aharmon ADICT by Chris LaBrooy Ball Till You Fall by Chris Seabrooks I Don’t Care I Hate It by Chris Piascik The Best Is Yet To Come by Miguel Harry MyPlanet by Amit Jakhu Related Posts Impactful Websites with Big Typography Creative Typography Poster Designs Illustrator Tutorials for Text Effects Fun Text Effect Tutorials You Have to Try Awesome Free Fonts You Should Download
Stockvault Blog . Blog . Aug 26, 2015 11:00am
We all love the dark side of textures, don’t we? Today I’m sharing a set of five large dark grunge stone textures that I shot earlier this summer. You can use them as you want, both personally and commercially. As before, you can download the whole set from the copy.com link below. Enjoy! Download all textures as ZIP from copy.com (52Mb) Did you like these textures? Let us know by leaving a comment, and you can even post a link if you used them in your artwork. Related Posts Free Texture Friday - Dark Red Grunge Free Texture Friday - Rusted Door Free Texture Friday - Grunge Wall 3 Free Texture Friday - Greenery Free Texture Friday - Grunge Wall 2
Stockvault Blog . Blog . Aug 26, 2015 10:59am
The industrial revolution is largely considered one of the most important events in human history, and directly responsible for many aspects of modern life. Simply put, the industrial revolution was based around the development of mechanical manufacturing, mass production, interchangeable parts and efficient creation of goods on large scales using raw materials. Industry and manufacturing are some of the largest business sectors around the world, with factories, warehouses, production lines and manufacturing centers being hubs of economic growth and activity, and frequently resulting in the development of urban centers surrounding them. Industry comes with some unique features though, such as pollution, dingy/dirty environments, noise and other grunge elements, that, while frequently unpleasant for the local environment, make for great photography subjects: melt shop (steel works) By Robert Industrial Region By kenji kikuchi steel works By Robert Industrial Nightfall 2 Color By Harald Mario Kocher centrale thermique By Sven Fennema Frevar by Night By Jan-Vidar Bakker Bremerhaven By Victor Knötzel casting steel By Robert Industrial Light By Niels Christian Wulff Industrial Nightfall 1 By Harald Mario Kocher Men on fire By Browni Aromatic Hydrocarbon By Andreas-Joachim Lins Tin Traditional industry By Haitham Elmerghani The Furnace By Beno Saradzic Old industries…. By Andrea Cavaliere Old Industries By JAVIER ESTRAVIZ photography BASF 2 By Markus Stollenwerk They are coming for us… By Ulf Härstedt Industries By Diyon Perez Untitled By Katya Gonova Industrial By Dennis Einecker Industrial By Gerry Langer Industrial By Jonathan Thomas industrial By Jens Alemann Industrial By Hugo Nidáguila tapping By Viktor Macha Go Away By Henrique Zorzan Greenhouse Effect By Judy W Gas refinery plant By U. Midtgaard Industrial Heaven By Marc Duiker Related Posts Blinged Out Metallic Automotive Photographs Chaos and Disorder in Photography Windows: More than Just an Opening in Photography Abstract Art in the Simple Leaf Deliciously Creative Food Logo Designs
Stockvault Blog . Blog . Aug 26, 2015 10:59am
Science fiction captures the imagination and minds of millions around the world, because it shows us things that may not be possible now but might be in the future, offers an alternate view of the world, or creates a sensationalized technological world around us. Science fiction usually combines elements of modern society and futuristic technology, and sometimes stretches the bounds of reality (hence the "fiction" part). It’s incredibly popular as a genre in art, movies, books and design. Science fiction elements work great in design, both for actual science fiction projects and for things like logo design for cutting-edge technology companies or out-of-this-world themed parties promoted using printed flyers. Here are some of the best sci-fi fonts you can find on the web: Space Neon Font Tron Font Matrix Font Star Wars Font Spaceship Font Blade Runner Font Xolonium Font Digital Tech Font Transformers Font Rixon Font Related Posts Awesome Free Fonts You Should Download 10 Great Free Fonts from 2011 10 High Quality Free Script Fonts Adobe Photoshop On the Web Awesome Typography Graphic Designs
Stockvault Blog . Blog . Aug 26, 2015 10:59am
Today’s textures are raw and fleshy with a narrow range of focus. I shot these delicious beef steaks with my 100mm macro lens, and the result is a landscape of dark red meat. I’m pretty sure they are perfect for some projects out there. Download the set below or click the individual picture. Enjoy! Download all textures as ZIP from copy.com (13,4Mb) Did you like these textures? Let us know by leaving a comment, and you can even post a link if you used them in your artwork. Related Posts Free Texture Friday - Gritty Wall Free Texture Friday - Greenery Free Texture Friday - Seaweed Free Texture Friday - Old Stone Wall Free Texture Friday - Dark Grunge
Stockvault Blog . Blog . Aug 26, 2015 10:58am
The human mind naturally appreciates order, which is essentially a sensible arrangement of objects, items, concepts or other subjects. Commonly, lists are ordered from low to high (or vice versa), images can be ordered from darkest to lightest, or objects from smallest to largest. The brain views order as a way of making sense of surroundings, and it’s also associated with other concepts such as cleanliness and detail. However, excessive desire for order is frequently categorized as obsessive. Order and organization are fun concepts to play around with, and they can range from appealing and beautiful to almost unnerving in terms of how orderly they are, such as in these pictures: Tree and shadow By Klaus Leidorf Order and Chaos By Patricia Sweeney Intruders By Carlo Cafferini kinetic spokes By Patricia Sweeney X By Carlo Cafferini Melbourne City Bikes By Son G @ the Market By Hinanit Kazir The Organized Randoms By Sam Azmy A Family of Russian Dolls By Kieron Doherty Fishing time By Dora Apostolova Stacked Conduits By Atilla BAYRAM "Order of Glass" | "Ordine di Vetro" By MAURIZIO PONTINI Arrangement By Hessam M. An0maly By Ofer Perl Order By Frozen Angilo Mohammed Spring Lines By Dirk Wüstenhagen Presents By Danilo Ascione Row of shadows By Klaus Leidorf frost……….. By Ulrike Morlock-Fien nature’s song By Codrin Lupei DE BRUGES A DAMME, le canal… By Magda Indigo THE CANALS of FLANDERS, AROUND DAMME… By Magda Indigo Jay Pritzker Pavilion By Roy Yang Some Trees and Some Snow By Daniel Weber O O o o O O By ionut jay Order By Phuc Doan ANOTHER UNIQUE VIEW… By Magda Indigo Structure By Jamie Condon Untitled By 660j Pattern By Andor Auber Related Posts Chaos and Disorder in Photography The Art of Flour Windows: More than Just an Opening in Photography 30 Extraordinary Superhero Photographs Abstract Art in the Simple Leaf
Stockvault Blog . Blog . Aug 26, 2015 10:58am
Lightroom is one of the most popular image editing programs out there, and with good reason: it has a large suite of tools for enhancing, editing and altering photographs. Lightroom is non-destructive, so it doesn’t erase the original image. One of Lightroom’s great features is the ability to use presets, which are basically sets of alterations that can be instantly applied to an image. Presets can edit colors, contrast, sharpness and other aspects of an image, and they can be altered after application. Here are some of the most useful Lightroom presets to enhance your photos: Enlighten Urban Elegant Fade HDR Sharpening Desaturate Me Vintage Faded Film Pinhole Warming Related Posts PhotoPresets for Lightroom Free Texture Friday - Old Rusted Fence Free Texture Friday - Old Plywood Free and Useful Photoshop Actions The Best Free Minimal Tumblr Themes
Stockvault Blog . Blog . Aug 26, 2015 10:58am
Last week, we looked at how order and organization work well in photography and create a naturally appealing visual for the eye. However, not everything in the world is organized and orderly; in fact, many things behave chaotically and are wild, unpredictable and disordered. These concepts can also play well in photography, and in a strange way, almost create a sense of order through their chaotic nature. Seemingly random or disordered items can develop their own organization and patterns, but pure chaos also has a naturally appealing style in photography. Here are some examples of chaos and disorder in photographs: Chaos By ido meirovich chaos in the water By Dimitar Chungovski chaos By Robert Order and Chaos By Patricia Sweeney Chaos 4 Sale By Kevin Doolan Chaos By Sreekumar Mahadevan Pillai Oystercatchers By Andrej Chudy Birds of Chaos By Ahmet Ünal Conscious chaos. By Sachin Gangadharan Macau By Cesar Nascimento Flying Free By Patrick Davis Hamburg Train Station By Mon Cano Chaos By Himanshu Sharma By Axel Hahn Chaos By MJU Photoworks Chaos from the top By Amith Nag chaos By baris can Streets of Kathmandu By markhuls1965 Water Chaos By Lúcio Dias Chaos By Máximo Panés Winter Chaos. By Mikko Raima The Silence of the Lambs By Istvan Kadar Birds… from Hitchcock By Fátima Silveira Lost By Guy Cohen Photography Cold Morning Flight By Doug Roane TAKING FLIGHT By Lee Fisher Effoliation By Nadav Dov Boretzki Birds By Tony N. Looking for a place… By Ronen Rosenblatt Sparkles By Chris Related Posts Perfectly Orderly and Organized Photographs Refreshing Photos Involving Water Windows: More than Just an Opening in Photography The Art of Flour Abstract Art in the Simple Leaf
Stockvault Blog . Blog . Aug 26, 2015 10:58am
Pink is a light reddish color that is often associated with femininity, love, beauty and romance, and the name is derived from flowers called pinks and their frilled edges. Pink is most often used in design for feminine products, flowers, hygiene products and other related products, and it can be a difficult color to design around if not used properly. It most often combines well with purple, white, or black. In this post we’ll look at some pink web designs that will inspire you. Herstory L’Oreal Makeup Genius Bolds Bloom Visualization symodd Mr. Sketch The Christmas Endorser Miceli Studios Epic Exit Related Posts Pretty in Pink Photographs Free Texture Friday - Pink Colored Paper Flat Agency Web Designs to Inspire You Fashion and Beauty Web Designs Inspiring Retro Web Designs
Stockvault Blog . Blog . Aug 26, 2015 10:44am
If you’re new to eLearning, then understanding and following instructional design best practices from the beginning is crucial to your success. The eLearning niche is vast, and you will find numerous theories, models, and resources that have worked for different experts. Leave them for later. Begin with the basic, most widely used models that eLearning designers acknowledge and use to structure and plan their training: ADDIE Model Merrill’s Principles of Instruction Gagne’s Nine Events of Instruction Bloom’s Taxonomy Note: This overview doesn't intend to evaluate the models. Each framework has its own advantages and disadvantages, and the choice of which to use will depend on which model works best for you, your company, and your learners.
Shift Disruptive Learning . Blog . Aug 26, 2015 07:57am
Tim and I have been discussing how our own experience in designing the SCORM Engine might directly apply to the "platform for e-learning" being discussed by LETSI. Behind the scenes, the SCORM Engine is essentially a platform that facilitates adding plug-in functionality to LMS’s. In this case, the plug-ins are currently all learning standards. For instance, the available plug-ins allow an LMS to support AICC, SCORM 1.1, SCORM 1.2, SCORM 2004 2nd Edition, SCORM 2004 3rd Edition and IMS Sharable State Persistence. We have high level designs to extend the SCORM Engine to support other plug-ins like a discussion board or an assessment engine. An added benefit of the SCORM Engine’s architecture is its integration layer that allows it to tie into any LMS. The diagram below shows how the SCORM Engine’s architecture allows for it to be integrated with any LMS and serve as a platform for supporting many content delivery mechanisms. Should LETSI move towards a path similar to the one described here, we would look forward to contributing our experience and lessons learned from the design of the SCORM Engine.
Rustici Software . Blog . Aug 26, 2015 07:57am
This is a festive time of year. Whether you celebrate Christmas, Hanukkah, Kwanzaa, the New Year, or if you’re just excited that it might snow soon, we wish you and your loved ones all the best. The other day, I was talking with a few other entrepreneurs about how we handle holiday gifts for employees. I’m proud of the answer I gave and figured I’d share it. Tim and I select personal, meaningful gifts for each of our employees. We try to give our employees something they will really enjoy and perhaps something they wouldn’t normally treat themselves to. It’s hard and it takes some work, but it’s worth it. So, what did we get this year? Brian, the rabid Tennessee Titans fan, will be taking his family to the sold-out final home game against the Steelers. David, the disc golf pro, will get his own disc golf goal at the office for coffee break practice sessions. He’s also getting a book on toilet paper origami since he has a reputation for never replacing the roll. Joe, the Xbox fanatic, is getting the complete Guitar Hero World Tour set. He will also get a new fleece pullover to replace the one he constantly wears to work that bears the logo of his former employer. John, the newly born do-it-yourselfer, will get a power tool shopping spree at Lowe’s. Eric and Troy, the two employees competing for the title of corporate brewmaster, will get kegging kits and kegerators for storing their home brewed beer. (Yes, this might be a subtle hint to share the spoils.) Kevin and Jean, our cultured crew, are getting season tickets to the Tennessee Performing Arts Center. Rustici Software is very much a lifestyle company. We strive to create an environment where we want to work and where others will want to work. That goal is often in conflict with growth. We have no ambition to be a large blockbuster company. We intentionally want to stay small. But how big is too big? I think the answer might be defined by our gift policy. I want to always know our employees well enough that we can give them a thoughtful and personal gift.
Rustici Software . Blog . Aug 26, 2015 07:56am
ADL recently released beta versions of the SCORM 2004 4th Edition Conformance Test Suite and Sample Run Time Environment. 4th Edition adds 4 new features and 30-something clarifications/enhancements/bug fixes to SCORM 2004. This evolution is not a drastic change to the specification, but it should represent a significant step forward in the compatibility and usability of SCORM 2004. Since most of the changes are simply clarifications, the implementation burden on SCORM adopters should be rather light. For content developers, only minimal changes (if any) will be required. Most content should be unaffected by the update to 4th Edition. LMS vendors (as always) will have a greater load to carry. For them, the amount of development work required will vary considerably based on the quality of their 3rd Edition implementations. The 4 new features should be rather straightforward to implement, but the numerous clarifications will present varying levels of difficulty to different vendors. The new features include:

- Rollup of weighted completion data. SCORM 2004 has always included a "progress measure" data model element that indicates "how complete" the user is on an individual SCO. This data will now be officially rolled up, with different activities having different weights. This weighting and rollup will give an accurate picture of the user’s overall completion of a course and enable LMS’s to provide accurate progress bars.

- Jump navigation request. Many sequenced courses want to provide the ability for SCOs to control navigation in a way that is different than what is available to the user. Previously, the navigation requests that a SCO was allowed to make were identical to what the learner was allowed to do. The new "jump" navigation request gives content authors more sequencing options and separates the requests that are available to internal calls from the requests that the learner is allowed to initiate.

- Shared data between SCOs. SCORM 2004 4th Edition now allows SCOs to share arbitrary buckets of data. When creating a sequenced course, it is often very helpful to have a common pool of data that different SCOs can access to maintain a shared state. The lack of this functionality has always been a big obstacle to creating cohesive sequenced content.

- More objective data available globally. All of the objective data that can be reported at runtime is now available to be shared with other SCOs and courses via global objectives. This will provide for simpler and more creative sequencing strategies.

Part of our duty as members of the ADL Technical Working Group is to be early implementers of new specifications to help ADL verify their accuracy. We are already working to update our products for SCORM 2004 4th Edition. The SCORM Engine was first on the list and we’re making good progress. ADL added or changed 92 LMS test cases for 4th Edition. Of those, 23 deal with the new features that we are starting to implement. Of the other 69 dealing with clarifications and bug fixes, we currently pass all but 12 of them. Of those 12 remaining test cases, 6 have open questions of interpretation that we’re discussing with ADL and the TWG. The other 6 should be completed soon. Currently 4th Edition is in a beta period for review and public comment. Please let us and/or ADL know if you have any feedback about the changes made for 4th Edition before the public comment period elapses.
We intend to release a 4th Edition compliant version of the SCORM Engine to the public SCORM Test Track instance shortly after it is completed. We will have production-ready and formally released versions of all our products that are compliant with 4th Edition very shortly after 4th Edition is finalized and out of beta.
Rustici Software . Blog . Aug 26, 2015 07:56am
Our customers help us improve our software all the time. We regularly hear about some eccentric SCORM problem or Package Property that would help make the product better, and we strive to include that as quickly as we can. Concepts of listening and improvement, though, need to extend beyond the core products themselves. I’ve been working with a prospect on the SCORM Engine, working toward contractual agreement. In doing so, he discovered what he felt was a hole in our agreements. Specifically, he couldn’t find any route by which he could opt out of the agreement if we failed to hold up our end of the bargain. Incredulous, I told him I was sure it was there, but that I would check in with our legal staff at Waller Law. Wouldn’t you know it, the basic document we’ve been using for years doesn’t include that specific right. Well, frankly, if we’re not doing what we indicated we would, what right do we have to lock you into your deal? The answer, obviously, is none. So, all of our basic agreements have been changed and we’re moving forward with some new language. Specifically, Customer may terminate this Agreement if Licensor is in material breach of its obligations under this Agreement and such breach is not cured within thirty (30) days of receipt of written notice from Customer describing the breach in reasonable detail. If we’re wrong, well, we’re wrong. So, changes have been made to our "annotated legal documents" as well, section 7, if you care. [If anyone cares, the annotated legal documents have been great for us. Prospects seem to love them and I'm not answering the same questions all the time anymore!]
Rustici Software . Blog . Aug 26, 2015 07:56am
Check it out. We won a modeling and simulation award from NTSA. Forterra Systems entered our joint project to deliver SCORM training from within virtual worlds. They describe the project and award in more detail in this press release. It’s fun to be recognized just for doing interesting work. We’re excited to see what the future holds for the use of simulations in training. At I/ITSEC this year, I was struck by the preponderance of simulation companies. It seems to me that simulations are about to hit a tipping point at which the technology to develop useful simulations will become quite affordable. I imagine the impact on corporate training will be considerable. Goodbye page-turners; hello interesting, engaging and effective training.
Rustici Software . Blog . Aug 26, 2015 07:56am
No, we do not. We get asked about it quite a bit, though. In our early years, we experimented with some formal lead referral arrangements, but they just never quite worked out, nor did they ever feel right. We’ve given it some thought and concluded that providing financial rewards to incent others to send business our way is rather antithetical to who we are and how we work. Referrals are a huge source of business for us. Over the years, we’ve developed a great reputation for providing excellent solutions, being highly competent and taking exceptional care of our customers. We built this reputation the old fashioned way, by actually living up to our promises, striving to exceed expectations and always maintaining the utmost integrity. When you do these things, it is easy for people to refer business your way. People refer us because they have had a good experience with us in the past or because they know that our solutions will solve a customer’s problems. In other words, our referrals are authentic and meaningful. Referrals that have a financial reward associated with them lose that authenticity. That’s not good for our reputation, it’s not good for the reputation of the referrer and it certainly doesn’t do the client much good. Sending the message that we need to purchase referrals lessens the authenticity of our reputation. At the end of the day, I think we’re better off maintaining the high ground. There are a whole host of other problems with paid referrals. For instance, if several people refer a client, who gets credit? Do we financially reward anybody who sends a referral, or just those who formally signed up? For those that didn’t sign up, does paying them cheapen our relationship? Does not paying them make them feel slighted? I could go on and on. We can’t express our appreciation enough to all of you out there who spread the word about Rustici Software day in and day out. You have our eternal gratitude and heartfelt thanks. While we do avoid financial referral incentives, we aren’t freeloaders; we would love to be able to return the favor. If there is ever anything we can do to help you out, please let us know.
Rustici Software . Blog . Aug 26, 2015 07:55am
As I mentioned a few weeks ago, we’re using "between implementation" time (Brian’s) to do some more work toward an elegant, pre-packaged integration between the SCORM Engine and Moodle. While Moodle is moving toward SCORM 1.2 certification, progress toward SCORM 2004 in Moodle is very limited. For certain clients (much of the government included), SCORM 2004 support is crucial. In discussions with one such client, I mentioned our pending integration and offered to share some images of it. Keep in mind, this is just an "alpha" version, but we’re getting pretty excited about it. The next steps will certainly include making it a bit more "Moodly", but we’re off to a good start. As you likely know, SCORM LMS’s are obligated to do a couple of things well… import and delivery foremost among them. In the SCORM Engine, we try to make import as simple as humanly possible. Any SCORM course should be available as a zip file containing a manifest (AKA a PIF). So, step one to taking SCORM training in Moodle is as simple as selecting a SCORM package and hitting upload. One of the first places the SCORM Engine differentiates itself from alternative SCORM players is its ability to handle content that is technically non-conformant. Doing so requires that the SCORM Engine provide intelligent feedback during the import process. This sample course from ADL is conformant, so there are no parser warnings, but feedback to the administrator is still important. Step two in achieving high levels of compatibility is delivering a course in the manner that best suits it. A big part of how we do this is through our Package Properties control. This allows us to deliver the course in different window structures, with different navigation parameters, or even with compatibility settings that accommodate common mistakes from content vendors. These options are a big part of why content generally works better in the SCORM Engine than anywhere else. Importing content is not a process isolated to the SCORM Engine. In parsing the manifest, it’s important to inform the host LMS about the course and its information. Even in this alpha form, we’re able to interact with Moodle to inform it about the creation of the course. Now that the course has been properly created in both the SCORM Engine and the host LMS (Moodle), it’s time to launch it. Delivery in the Moodle integration is just like delivery in any SCORM Engine implementation. It is as simple and direct as we can make it. Everything in black and blue below is completely skinnable, and the compatibility that comes from our Javascript architecture surpasses that of any other SCORM provider. This is particularly evident when running SCORM 2004 courses that contain sequencing and navigation (again, this is a fundamental problem for Moodle implementations today). As always, proper integration with a host LMS (Moodle, in this case) requires informing that host about the progress of the learner. As seen here, scores from the content are properly rolled up and reported to Moodle (Brian’s not exactly proficient with Photoshop). While work remains to be done to make the integration a bit more Moodly and hands-off, we’re really pleased with just how well these products fit together and how complete a solution this is. At this point, we’re looking to license through Moodle hosts and the like rather than directly licensing under GPL or something similar. That time may come as well. Please be in touch if you have questions/comments/interest.
Feel free to comment here or email us: info@scorm.com.
Rustici Software . Blog . Aug 26, 2015 07:55am
From my recent post on the LETSI blog: They say that sometimes the best way to learn how to swim is just to dive into the water and see what happens (please don’t actually try this at home). Often, you can analyze a problem and get nowhere, but once you just jump in and start tinkering, the solution becomes apparent. That is the approach we’re taking in the LETSI Architecture Working Group right now…we are about to jump into the water. The LETSI Architecture Working Group has decided to take on a small project to see how well we can swim. We picked a relatively small and simple problem, but one that will provide a lot of value once solved. If approved, the Architecture Working Group’s first project will be to define a web services interface for the SCORM run-time API….The real goal here is not the technical solution…The real goal is consensus on the project plan and intended deliverables. This project plan is intended to elicit comment. Join in the discussion over there; it promises to be lively.
Rustici Software . Blog . Aug 26, 2015 07:55am
We’ve recently completed development of a hosted version of our SCORM Engine. In the coming weeks we will be transitioning TestTrack over to using the hosted Engine to enable much greater scalability than the single server install can currently provide. This project will involve liberal use of several of the Amazon Web Services, due largely to their ease of use, low cost and high scalability. Using the Elastic Compute Cloud (EC2) was an obvious choice for us, as we can easily create and destroy machines as our load fluctuates. Less obvious, however, was how we should go about storing content files. These were our requirements for a storage device:

1) Easy ability to upload large files using standard FTP clients, and possibly other commonly available protocols. Files over 1 GB aren’t uncommon, and larger files than that are certainly possible.
2) Real-time file updates (i.e. when I upload a new version of a file and then immediately request it, there should be no chance that I get back an older version).
3) A small amount of storage for now (so it’s cheap), with the ability to grow to a few TB or more as demand requires it.
4) Storage should all be accessible at a single root location. We currently have files spread over two drives, which requires a small bit of one-off code to determine which files from which users go on which drive.
5) Ability to access the content directly from any one of (potentially) several web servers.
6) Some form of backup and/or redundancy to prevent data loss.

Amazon currently provides two different mechanisms for persistent storage: Simple Storage Service (S3) and Elastic Block Store (EBS). Each of these storage methods has its advantages, but at first look, neither will fulfill all of our requirements straight out of the box.

S3 provides limitless storage in "buckets" of up to 5 GB each, while only charging for the amount that you’re actually using (#3). It provides access to your files via HTTP from anywhere in the world (#5), while promising 99.99% availability because of its decentralized, redundant, fault-tolerant architecture (#6). However, it doesn’t directly support FTP (#1), and file uploads don’t necessarily propagate to other nodes instantaneously (#2). We’d also have to span multiple buckets, meaning that we’d have to track which customers were stored in each one (#4). We could potentially overcome the FTP issue by writing an FTP client that uses S3 as its back end, but that’s time-consuming and inelegant, and it makes the cost of switching to another protocol (like SFTP or SCP) extremely high. The other problems are inherent to the S3 architecture, so we’d just have to deal with those.

EBS behaves just like a traditional block storage system. You can think of an EBS volume as a virtual external hard drive attached to your virtual machine. File access is only limited by the security settings on your machine (#1), and files written to the device are immediately available (#2), just as they would be on a "regular" hard drive. Drives can range in size from 1GB to 1TB, and you can mount several EBS drives to one machine (up to 20, I believe), thus providing enough storage to meet our needs for the foreseeable future on a single server (#3). EBS volumes are only available in one Amazon Availability Zone, potentially making them less reliable than S3 storage. However, you can create snapshots of drives and store them in S3, whereafter you can restore those snapshots to any EBS volume in any availability zone.
Since EBS volumes behave just like hard drives, you can also mirror them or take any number of other traditional steps to protect your data (#6). The downside to EBS is that you have to pay for all space that’s allocated to you, whether or not you’re actually using it (#3). Additionally, multiple drives mean multiple mount points (#4), and as with regular drives, they can only be attached to one machine at a time (#5).

Ultimately, we decided to go with EBS because all of its shortcomings can be overcome with widely available, common solutions. We can start with small volumes to keep the cost down, and then grow them whenever we need to by taking a snapshot and then restoring it to a new, larger volume (#3). We can overcome having to deal with multiple mount points by using LVM to join multiple volumes into one big one (#4), and we can overcome the single machine limitation by exposing the whole thing with NFS.

With all of the decisions out of the way, it’s now time to actually combine all of these things together. I found a good bit of info about each of these pieces in various places on the web, but it doesn’t appear that anyone has set out to put all of this in one stack (or if they have, they didn’t write about it). As such, I thought I’d take some extra time to record the things I’ve done to make all of these pieces work together.

LVM Config

Before we can begin setting up our storage solution, we first need a machine in the cloud to host the drives. The easiest way to start one up is to use the ElasticFox plugin for Firefox. If you’re not familiar with ElasticFox, go take a minute to play around with it and see how it works. We’ll be using it for quite a few different things throughout this document, so you’ll need to become familiar with it. Open up ElasticFox and fire up a VM. At some point we may start using home-baked machine images so that everything we need is already installed, but right now we’re using a public Ubuntu 8.04 image from Alestic (ami-51709438). Since we’ll be installing most of the required packages as we go, these steps should work on other distributions as well, but I haven’t tried it.

Once your VM is up and running, take note of which Availability Zone it’s in. Now click over to the Volumes and Snapshots tab in ElasticFox and create two new volumes, making sure they’re in the same Availability Zone as your VM. For the purposes of this demo, I made my drives 1 GB each. Since this demo is starting off as a proof-of-concept, there’s really no use in paying for more storage until we’re actually going to use it. We can always come back and grow (or replace) those volumes later.

If you’re both good at math and observant, you’ve probably noticed that there’s not really any need for creating two separate elastic block drives that are only 1 GB each. Why not just create a single 2 GB drive? In practice, the single drive is probably the way to go. But one of the points of this exercise is to prove that we can combine two of these things together and make them look like one drive. If you’d rather just take my word for it, you can always go through these steps with just one drive, but if you’re looking for a (practically) infinitely growable, single-volume storage drive, you’ll still need to get LVM set up.

Once your drives are created, use ElasticFox to attach them to your VM. On the Ubuntu image that I’m using, partitions already exist as /dev/sda1, /dev/sda2, and /dev/sda3, so I attached my two drives at /dev/sdb and /dev/sdc to avoid any confusion.
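If you prefer the command line to ElasticFox, the volume creation and attachment above can also be done with Amazon’s EC2 API command-line tools. This is only a rough sketch, assuming the tools are installed and your credentials are configured; the volume IDs, instance ID and Availability Zone below are placeholders, not values from this setup.

# Create two 1 GB volumes in the same Availability Zone as the instance
> ec2-create-volume -s 1 -z us-east-1a
> ec2-create-volume -s 1 -z us-east-1a
# Attach them to the running instance as /dev/sdb and /dev/sdc
# (vol-... and i-... are placeholder IDs)
> ec2-attach-volume vol-11111111 -i i-aaaaaaaa -d /dev/sdb
> ec2-attach-volume vol-22222222 -i i-aaaaaaaa -d /dev/sdc
# Confirm that both volumes show up as attached
> ec2-describe-volumes

Either way, the end result is the same: two raw block devices attached to the VM, ready for LVM.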
It’s probably a good idea to record which drive is attached as which device for future reference. I have tested reconnecting the drives as different devices without any problems, but my testing wasn’t thorough and I can’t guarantee that it’ll always work. If nothing else, it seems like a really bad idea to go switching them around, so I’d recommend coming up with some way to make sure they’re always connected as the same device.

Now SSH to your VM and connect as root. ElasticFox allows you to do this easily by right-clicking on a running VM and selecting "Connect to Public DNS Name." The stock image that I’m using doesn’t come with all of the LVM and NFS packages that we’ll need, so before we can begin configuring our drives, there are a few things that need to be installed. Let’s update the apt-get cache and then install everything that we’ll need for LVM.

> apt-get update
> apt-get install lvm2 dmsetup dmapi dmraid

Setting up LVM involves several steps, but they all make sense if you take a step back and look at an overview of what they’re actually doing. LVM allows you to take a group of physical drives and combine them into one giant virtual drive. You can then partition and/or use that drive in any way you see fit. This allows you to create partitions that are actually larger than any of the physical drives you’re using, and it also gives you the ability to expand your drive in the future. Based on that explanation, the following steps should be pretty self-explanatory.

First, create a physical volume on each drive. This allows them to be recognized by LVM. Most LVM tutorials that I’ve read say that you first need to partition the drive with an LVM partition, but that’s only the case if you plan to use parts of this drive for other purposes. In our case, we want LVM to use the entire drive, so we can just create the physical volume directly on it:

> pvcreate /dev/sdb /dev/sdc

Next, create a volume group, which tells LVM which physical volumes should be grouped together. As you would expect, you can add drives to or remove drives from this group in the future.

> vgcreate elastic_drive /dev/sdb /dev/sdc

Finally, create a logical volume (or multiple logical volumes) that can then be formatted and mounted just like a "regular" drive. Here, I’m specifying the size as 100% of the volume group, but you can also specify an absolute size if you’d rather.

> lvcreate -n content -l 100%VG elastic_drive

Now that we have a usable drive created, we’re finally ready to put a filesystem on it. There are a number of options you can use, the most popular of which is probably ext3. ReiserFS and XFS are also pretty popular. After doing a little bit of research, we decided to go with XFS because you can resize it without unmounting it (ext3 can be resized, but only after it’s been unmounted). You can also freeze it at any time to allow for safe snapshots, but LVM already provides that functionality. Before we can create our filesystem, though, we need to install the necessary XFS packages:

> apt-get install xfsprogs

To create our XFS filesystem:

> mkfs.xfs /dev/elastic_drive/content

The last step in creating our super-large, expandable drive is to create a mount point for the new drive and then mount it. Right now, I’m mounting it at /var/content.

> mkdir /var/content
> mount /dev/elastic_drive/content /var/content

To make sure that everything is working, let’s put a couple of files out on our giant drive.
> cp /etc/fstab /var/content
> cp /etc/rc.local /var/content

Now let’s check to make sure they disappear and reappear when they’re supposed to:

> ls /var/content [should see your files listed]
> umount /var/content
> ls /var/content [should get no results]
> mount /dev/elastic_drive/content /var/content
> ls /var/content [files should be back]

OK, that’s progress, but we’re not finished yet. The final step is to make sure our mount persists when we reboot. You _should_ just be able to add the following line to /etc/fstab:

/dev/elastic_drive/content /var/content xfs defaults 0 0

… but that’s not working for me. For some reason, the logical volume isn’t coming up as active, so the mount fails when I reboot. If you’re having the same problem, here’s a little hack that’ll make it work. Just add the following two lines to your /etc/rc.local file:

lvchange -ay /dev/elastic_drive/content
mount /dev/elastic_drive/content /var/content

I’d highly recommend rebooting your server now and making sure that your mount comes back up. It’s much better to discover any problems now before you’re relying on this shared volume in a production environment.

NFS Config

Now that our giant storage drive is configured, the next step is to configure NFS to share it amongst all of our other machines. First, let’s load all of the NFS packages we’ll need:

> apt-get install portmap nfs-kernel-server

Next we need to add an entry in /etc/exports to expose the drive:

/var/content *.compute-1.internal(rw,no_subtree_check,sync)

A few things to note about the above line:

1) We’re technically exposing our drive with read/write access to anyone in our portion of the Amazon cloud. However, the security group that we’re in will prevent anyone from outside the group from accessing our machine on the NFS port. As long as that firewall holds, then this is totally secure. I’ve elected to open myself up to anyone in my security group because I don’t want to have to come back and edit this file every time we spin up another machine. If you would like an extra layer of security, you can specify specific machine names here instead.

2) For additional security, you can also add entries to the hosts.allow and hosts.deny files to further prevent unauthorized access. Again, this is redundantly securing something already taken care of by the security group, so it’s not strictly necessary (but it’s not a horrible idea, either).

Now we just have to refresh which shares have been exported, since apt-get was nice enough to have already started the NFS server:

> exportfs -a

Technically, we’re finished now, but let’s verify that our NFS share is actually working. Fire up another VM in the cloud to serve as our client machine, making sure that it’s in the same security group and Availability Zone as the server. Note that if you didn’t set up your exports file to allow anyone in your security group to connect, you’ll have to go specifically add this new machine to your exports file on the server. Once the machine is up and running, we’ll need to install some NFS packages to allow it to run as a client:

> apt-get update
> apt-get install nfs-common

Once the installation is complete, create a directory to serve as your mount point and mount the remote filesystem. I’m mounting mine at /var/content_server.
> mkdir /var/content_server
> mount nfs_server_name:/var/content /var/content_server

Finally, test to make sure that your files are showing up:

> ls /var/content_server [should see files from your remote drive]

One final note on security. For the purposes of this document, I’ve made the assumption that you both trust and don’t mind sharing your files with all machines in your security group. The alternative steps that I briefly discussed (using hosts.allow and hosts.deny) should further lock down your server, but the one thing I didn’t discuss is sharing your files with a machine outside your security group. Beyond the steps outlined here, you’ll need to add an entry to your security group to open up port 2049 (the default NFS port) to the IP address of your client machine (DNS names won’t work when configuring security groups).

Server Restore

Now we have our file server up and running, with a nice expandable drive for files that’s easily recoverable even if our host machine crashes or is terminated. That all sounds nice, but how do we know that any of that stuff actually works? We don’t… yet. So let’s find out. Let’s assume that everything is set up as it was at the end of the configuration document: you have a "server" VM running that has two 1GB EBS volumes attached to it. Those volumes are combined into one logical volume that is then mounted on the "server" and shared via NFS. You also have a "client" VM running that has the logical volume mounted via NFS.

So what happens if the server restarts? Terminate your server VM. ElasticFox has a nice terminate button that makes this easy. Now fire up a new VM, reattach your elastic drives, and SSH to it. Since this is a brand new VM, all of the packages that we installed on the old one aren’t there. Let’s get those back:

> apt-get update
> apt-get install lvm2 dmsetup dmapi dmraid xfsprogs portmap nfs-kernel-server

Now let’s look and see if our logical volume is still set up across the two elastic drives:

> lvs

Sweet! It’s still there, so we don’t have to go through all of those configuration steps again. Unfortunately, though, it’s not active. Let’s fix that:

> lvchange -ay /dev/elastic_drive/content

Now let’s mount it again:

> mkdir /var/content
> mount /dev/elastic_drive/content /var/content
> ls /var/content [should see the files you put on there before]

So our drive is back up and running. Now we just need to make it come back after reboots by redoing our changes to /etc/fstab:

/dev/elastic_drive/content /var/content xfs defaults 0 0

… or to /etc/rc.local if you had to use my little hack:

lvchange -ay /dev/elastic_drive/content
mount /dev/elastic_drive/content /var/content

And finally, let’s share it back over NFS with an edit to /etc/exports:

/var/content *.compute-1.internal(rw,no_subtree_check,sync)

and refresh our exported filesystems:

> exportfs -a

Now all that’s left is to update our client machine(s) with the new location of the content server.

> umount /var/content_server
> mount nfs_server_name:/var/content /var/content_server
> ls /var/content_server [should see the files from your elastic drives]

There are a couple of remaining problems that I see with this setup, and the only way I know of to solve them is to write my own scripts. First, what happens on a client machine when it tries to write to the remotely mounted directory while the server is down? How do we make sure that no data is lost while we wait for the server to come back up?
And second, how can we make the client machines aware when the server’s location changes? My thought is to write a script that monitors the server. When it detects that the server can’t be reached, it funnels writes into a temp directory until the server directory can be remounted.

Volume Growth

One of the most important features of our storage setup is that it’s easily expandable, so it’s probably a good idea to make sure we can actually expand it. There are several ways to do this, so let’s outline a few of them now.

The easiest way to increase our storage is to just add another EBS volume, so let’s try that now. Go to the Volumes and Snapshots tab in ElasticFox and create a new volume that’s equal to the size you want to add. Attach the volume to your server. Since we’re currently using /dev/sdb and /dev/sdc, it follows that we should probably connect this one at /dev/sdd. Now we go through steps that are similar to the initial setup, except that we’ll be growing existing entities rather than creating new ones.

First create a new physical volume on your new device:

> pvcreate /dev/sdd

Now add the physical volume to our existing volume group rather than creating a new group:

> vgextend elastic_drive /dev/sdd

Next, extend the logical volume to consume 100% of the free space available (unless you’re planning on saving some of the space for some other purpose):

> lvresize -l 100%VG /dev/elastic_drive/content

And lastly, expand the filesystem to fill the logical volume (note that the path is to the mounted volume, not to the device):

> xfs_growfs -d /var/content

Now let’s check our work:

> df -h

Our giant drive should show up as /dev/mapper/elastic_drive-content, with a total capacity equal to the sum of the capacity of the three individual drives.

That was nice and easy, but it’ll only work for so long, as Amazon currently limits each customer to 20 volumes. Assuming that you haven’t made each of your drives the maximum size (currently 1 TB), you can utilize a lot more space before having to beg Amazon for special treatment. The easiest way to expand the capacity of your existing drives is to take advantage of Amazon’s snapshot feature. Amazon allows you to take a snapshot of an EBS drive at any time, automatically storing it to S3. The transfer cost to S3 is free, but you will be charged for the S3 storage space at the standard rate. Once you’ve created a snapshot, you can restore it to your drive at any time. However, you can also restore that snapshot to a new, larger drive, which is what we’re going to do here. The downside to this method is that it requires taking down your filesystem. That’s simply not an option for some people, but if you don’t mind some downtime, this is definitely the easiest way to expand a single EBS volume.

First, unmount your filesystem.

> umount /var/content

Now set the logical volume to inactive. This is more just a safeguard to make sure that nothing can mount or modify anything on the elastic drive while we’re expanding it.

> lvchange -an /dev/elastic_drive/content

Now open up ElasticFox and go to the Volumes and Snapshots tab. Find the volume you wish to replace, make a snapshot of it, and then detach it. You can either delete it now or wait until later if you want to be extra-safe (your data is already backed up in the snapshot). Now create a new volume from the snapshot you just took, making sure to specify your new, larger size, and attach it back to your machine at the same point as the old one.
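For reference, here is roughly how that snapshot-and-replace step might look with the EC2 API command-line tools rather than ElasticFox. Again, this is a hedged sketch: the volume, snapshot and instance IDs are placeholders, and the example assumes you’re growing a 1 GB volume to 10 GB.

# Snapshot the volume being replaced, then detach it
> ec2-create-snapshot vol-11111111
> ec2-detach-volume vol-11111111
# Create a larger volume from that snapshot in the same Availability Zone
# (snap-... is the ID returned by ec2-create-snapshot)
> ec2-create-volume --snapshot snap-33333333 -s 10 -z us-east-1a
# Attach the new, larger volume at the same device point as the old one
> ec2-attach-volume vol-44444444 -i i-aaaaaaaa -d /dev/sdd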
(As stated earlier, it’s probably not 100% necessary to reattach at the same point, but I’m not willing to say that for sure.) Back in your SSH window, you should now be able to look for and find all of your physical volumes.

> pvs

At this point, you can go ahead and set the logical volume back to active and remount the drive in order to minimize downtime:

> lvchange -ay /dev/elastic_drive/content
> mount /dev/elastic_drive/content /var/content

Unfortunately, the physical volume on the new drive is still the same size. That’s because we have to explicitly tell it to grow into the new space:

> pvresize /dev/sdd

Since this new drive was created from a snapshot of the old one, it’s already a member of the volume group, so we don’t have to make any changes there. However, we do still have to expand the logical volume and the filesystem to take up the rest of the space:

> lvresize -l 100%VG /dev/elastic_drive/content
> xfs_growfs -d /var/content

That’s it! To check our work, we can run:

> df -h

So that was pretty easy, but what do we do if it’s entirely unacceptable to take the filesystem down? The answer to that question is only slightly more complex, but it’s a good bit more time-intensive. Even though we’ve used LVM to configure our multiple physical drives as one logical one, LVM provides facilities to guarantee that a particular physical volume is no longer in use (and therefore safe for removal). In order to clear off a volume, however, we have to first have enough unallocated space available in the volume group to be able to hold all of the data from the physical volume that we wish to remove. The easiest way to accomplish that is to go ahead and create a new EBS drive and add it to the volume group, so go to ElasticFox and create and attach a new drive that’s equal to the amount of space you want to add PLUS the size of the drive you’re going to remove. For example, if you want to add 100 GB worth of space, but you’re going to remove a 50 GB drive in the process, your new drive needs to be 150 GB.

Once the new drive is attached, we need to set it up for use by LVM. That means creating a physical volume on it and then adding it to the volume group.

> pvcreate /dev/sdd
> vgextend elastic_drive /dev/sdd

It’s important to note that we DO NOT want to extend our logical volume onto the new physical volume just yet. Right now, we need that unallocated space in order to clear off the drive we’re going to remove. Now we’re ready to clear off the old drive using pvmove. If you’re curious to know more about how this works, the pvmove man page is really good.

> pvmove /dev/sdb

When I tried pvmove the first time, I got this error: "mirror: Required device-mapper target(s) not detected in your kernel." This is because pvmove uses the device mapper mirroring module, which isn’t loaded by default. If you get the same error, try loading that module and trying again.

> modprobe dm-mirror
> pvmove /dev/sdb

Now that our physical volume is empty, we can remove it from the volume group:

> vgreduce elastic_drive /dev/sdb

Note that if you try to call vgreduce on a volume that isn’t empty, it will NOT get removed. Instead, you’ll get a warning telling you that the physical volume is still in use. This should give you some peace of mind, as you can rest assured that vgreduce won’t mess with the integrity of your data. At this point, /dev/sdb is ready to be repurposed onto something else, or destroyed altogether.
If you're planning on adding /dev/sdb to another volume group, it's ready to be added using the vgextend command. If you plan to use it as a "regular" drive, you'll first need to remove the physical volume information from it using pvremove. However, if you're planning on simply destroying the volume, all you need to do is go to ElasticFox, detach the volume, and delete it. It should be noted that deleting the EBS volume does not destroy any snapshots that were made from it. This is a good thing, as those snapshots are still a vital part of any backups you've made. Should you need to restore your drive from an earlier point, you can always restore one of those snapshots to another drive.

So far, we've added our replacement EBS volume, moved the data over to it, and removed/destroyed the volume that we're replacing. However, we haven't actually increased the capacity of our logical volume and filesystem, which was the whole point of this exercise in the first place. Let's do that now:

> lvresize -l 100%VG /dev/elastic_drive/content
> xfs_growfs -d /var/content

And finally, we can check our work for that last bit of reassurance:

> df -h

Backup Strategies

Thus far, we've put a lot of effort into creating a flexible, expandable, and accessible file storage device, but there are still three key attributes we need to address before our drive is ready to use: performance, redundancy, and recovery.

I'm moving forward with the assumption that the performance of the drive itself is already about as good as it's going to get. Behind the scenes, Amazon has to be using some pretty hefty hardware, and there's probably some degree of striping going on as well. Anything we do on top of that is going to introduce a good bit of complexity, and probably won't yield much, if any, performance gain. We could certainly run some benchmarks to validate or refute those claims, but at this point I don't see the need.

Redundancy is also something that we're leaving up to Amazon. As they state in their description of EBS, "Because Amazon EBS servers are replicated within a single Availability Zone, mirroring data across multiple Amazon EBS volumes in the same Availability Zone will not significantly improve volume durability." As such, it's hard to envision any scenario in which mirroring (or other forms of redundancy like RAID 5) would be worth the trouble. If you disagree with my assessment, there are several tutorials out there that describe combining LVM with various flavors of RAID.

The third item that merits discussion is recovery, and that's definitely something that requires a plan. Amazon has a nice snapshotting feature in place that makes backing up single EBS volumes quick, easy, and inexpensive, but it fails to account for situations like ours, where multiple EBS volumes are directly tied together. Fortunately, we can still take advantage of EBS snapshots if we do a little bit of legwork before and after. The reason we can't rely on snapshots taken independently from each of our EBS drives is that those snapshots come from different points in time, and the filesystem could have been in a different state at each of those points. Therefore, restoring your EBS drives from snapshots taken even a fraction of a second apart could potentially result in unstable behavior.
However, there's no way to guarantee that multiple snapshots are taken at the same instant, so if we're going to use the Amazon snapshot feature, we first need a way to guarantee that our filesystem remains stable and unchanged across those snapshots. Fortunately, XFS includes a utility to "freeze" the filesystem at a given point in time. When a filesystem is frozen, all writes that were in progress before the freeze are forced to finish, and all writes initiated after the freeze are blocked until the filesystem is "thawed." Any thread attempting to write to the frozen filesystem will simply block until it's allowed to complete. From a data integrity standpoint, this is just as safe as unmounting the filesystem, and from an application standpoint, it's much, much better, as applications will simply wait for the filesystem to become available again rather than throwing errors because it appears to be missing.

So now, making our backups becomes a relatively simple process. First, freeze the filesystem:

> xfs_freeze -f /var/content

Next, go to ElasticFox and take a snapshot of each elastic drive. Then go back to your terminal and thaw the filesystem:

> xfs_freeze -u /var/content

A couple of things to point out here:

1) LVM also has a snapshot feature, but it doesn't really buy us anything. It requires setting up a second logical volume as a mirror that's equal in size to the volume we want to back up. Beyond being a giant pain, we'd still need a way to actually save the snapshots, and that would presumably involve S3. So the end result would be that we're using twice as much disk space and a significantly more complex setup in order to back up our data to the same place.

2) There is a bit of complexity to restoring from this backup scenario, as you now have multiple snapshots (one for each volume) that represent a single backup. When you do a restore, you'll have to make sure that you restore all of the snapshots back to the correct drives. This is pretty easy to do at first, as each snapshot includes the Volume ID of the volume from which it was taken. However, should you ever replace one volume with a new one (as demonstrated in one of the growth strategies), the Volume ID on the snapshot will no longer correspond to the volume that it represents. Therefore, it's imperative that, whenever you "replace" a volume, you keep a mapping of the old and new Volume IDs somewhere.
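If you'd rather not click through ElasticFox for every backup, the freeze/snapshot/thaw cycle is easy to automate with whatever EC2 command-line client you have handy. Here's a minimal sketch assuming Amazon's EC2 API command-line tools (ec2-create-snapshot) are installed and configured; the volume IDs shown are placeholders for your own:

#!/bin/bash
# Minimal sketch: take consistent EBS snapshots of every volume behind /var/content.
# Replace the placeholder volume IDs with the real IDs of your elastic drives.
set -e

MOUNT=/var/content
VOLUMES="vol-aaaaaaaa vol-bbbbbbbb vol-cccccccc"

xfs_freeze -f "$MOUNT"              # block new writes and flush in-flight ones
trap 'xfs_freeze -u "$MOUNT"' EXIT  # always thaw, even if a snapshot request fails

for vol in $VOLUMES; do
    ec2-create-snapshot "$vol"      # snapshot each member volume while the filesystem is frozen
done

The trap is the important part: you want the filesystem thawed no matter what happens to the snapshot requests. The freeze window only needs to last as long as it takes to issue the requests, since EBS snapshots are point-in-time as of when they're initiated and finish copying in the background.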
Rustici Software   .   Blog   .   Aug 26, 2015 07:55am
Welcome back! Yeah, that's right, it's been nearly two months since I posted on the blog. Pathetic, right? Well, we've been writing a lot, it's just spread out all over the new site. So, I'll beg your forgiveness and get on with some new writing.

Time has treated us well. We've passed 90 implementations of the SCORM Engine around the world, and frankly that means we're supporting a ton of SCORM transactions on a daily basis. We've asked a few of our clients (literally, 5 of the big ones) for details on their usage, and we see more than 300,000 registrations per month across the distributed SCORM network. That's 10,000 people taking training via the SCORM Engine every day. We're pretty psyched about that, but it does come with certain responsibilities.

Support

As we've evolved from a company with a couple of developers who built software into something bigger, we've changed our staff to accommodate that. We now have a "support department" responsible for taking care of our implementations and support requests. To the degree that it's possible, we're making every effort to nip support requests in the bud by answering questions before they're asked. We do this in a few ways: documentation, simple code, compatible code, and now, via a support portal. We've gotten to the point that we're actually publicly posting answers to questions as they come in. As a customer, you're not obligated to search for your answer before you get in touch with us, but you're more than welcome to do so.

Last week, I had an experience with a couple of consumer companies that got me thinking about what we have to do well.

My Support Adventure

Last year, we set up everyone at the office with 30″ monitors and Macs. While I was not an Apple fanboy generally, I have been converted. I recently decided I needed to finish the conversion at home and elected to move from a PC to a Mac, and I simply couldn't do without the 30″ monitor. So I ordered the Dell monitor (just like at work) and one of the new Mac Minis. Long story short, the monitor and the Mini arrived, and I set them up excitedly. I had read rumors of problems with the Mac Mini driving 30″ monitors via the new connector, but figured that would never happen to me. When I set the pair up and the monitor couldn't render the image, I was bitter, and got after Apple support on the subject. In the end, I concluded with the support rep at Apple that it wasn't a doable configuration, and since I would be able to return the computer, it was tolerable. I would just run out to the local Apple store and buy an older Mac like the ones we have at work, a setup I knew would work because it works at, well, work.

Well, I got home with the Macbook Pro (yes, more expensive) and guess what?! It didn't work either. Ugh. Ultimately, I took the Mini to work and discovered that it does work in this configuration! The problem was entirely the Dell monitor at home.

So, the calls begin. I called Dell and spoke to literally 13 support reps in succession, and not once because I asked to speak to a supervisor. Each was convinced that I had purchased the monitor in some other country and bounced me around because of it. When I ultimately got to someone willing to address my problem, the best he could do was send a replacement "sometime in the next 20 days". Not one of these people seemed to have any real desire to make my problem go away, nor did they make any effort to understand the problem in any real detail.
Compare this to the Apple experience, in which I spoke to a total of two people, and only because I had to call back. Each of them seemed to want to solve my problem. (In this case, the problem was that I was trying to return the Macbook Pro because I no longer needed it. It was, fundamentally, a problem that they hadn't caused.) Each of them sought out the part of their organization that could handle my return. They asked if they could put me on hold, they dug through the options, and they got back to me in real time to work through the problem. Ultimately, I was able to return the extra computer at no cost to me. This was great, but the lasting impression was that they wanted to help me and they were equipped to talk to the right people to solve problems.

My Conclusion

OK, I totally get that this is nothing new. I get it. But this is what I walk away with. Whether you call us, email us, or create a ticket through the support portal, these are the things we're aspiring to:

Access. The person who receives your request, typically Joe, will have access to people with answers. We're loaded with smart people who know our niche products well, and every one of them makes themselves available to Joe at the drop of a hat.

Desire. Because we have a niche product, we have to be exceptional. While LMS vendors see SCORM problems as tangential and annoying, we see them as fundamental.

Feel free to call us on this if we fall short. If you walk away from an incident with us feeling like I did after talking to Dell, I want to know. If you feel like I did after talking with Apple, then we're doing OK. (I wouldn't mind hearing that either…)
Rustici Software   .   Blog   .   Aug 26, 2015 07:54am
The SCORM community is abuzz these days with talk about the security (or lack thereof) in SCORM. As an "alpha-scormmie", I'd like to share some of my perspectives on the issue and try to put things in context.

The crux of the issue is that since SCORM communication uses JavaScript in a web browser, it is inherently insecure and can be spoofed by any semi-competent web developer who knows a little bit about SCORM. This means that somebody who knows what they are doing can trick an LMS into thinking that a course was completed using some rather simple scripting. The fact that SCORM makes it easy for content to communicate with the LMS also makes it easy for a hacker to maliciously communicate with that LMS.

This issue is not new. It has been widely known for quite some time. Fellow alpha-scormmie Tom King has been talking about it a lot recently and even submitted a white paper to LETSI about it. The recent collective panic ensued after a blog post by Philip Hutchison which included a downloadable application that allows anybody to execute the cheat. Apparently, the cat is now out of the bag. Philip's post caused major LMS vendor Plateau to send an email to its clients warning them of this "new" vulnerability. That, in turn, has led to a frenzy of discussion and calls to address the problem now, to which ADL has responded with some modest attempts at stopgap solutions.

Before we all start to panic, let's take a step back and look at this with some perspective. [And a note before I go much further...I fully agree that SCORM should be made more secure, please don't doubt that. This post is designed to put security in perspective, calm the panic and ensure that whatever solution we adopt moving forward doesn't sacrifice other important principles in the name of security.]

Defining "Secure" Online Training

Before we can start talking about how to secure online training, we need to understand what "security" means in this context. There are two primary functions of online training:

Delivering instructional content to the learner
Assessing the learner to ensure that the delivered knowledge has been retained

That leads to the following definition of "secure online training": secure online training ensures that the identified learner has "experienced" the intended instructional content and retained the knowledge conveyed therein. Notice that there are three important parts to that definition:

"the identified learner"
"experienced the intended instructional content"
"retained the knowledge"

Why Do We Want Secure Online Training?

If we want to secure something, there is usually a reason, i.e. something of value we want to protect. There must be something at stake. Online training is typically used in two contexts that provide valuable stakes:

Delivering training to meet a compliance requirement. In the compliance context, we want to ensure that the learner "experienced the intended set of instructional content".
Instructing learners so that they are competent in new areas and ensuring that they are proficient enough to perform at a level suitable to their task. In this context, we want to ensure that the learner "retained the knowledge". This retention may lead to a promotion or other reward based on his or her new skill.

These stakes are certainly high enough to merit attention and to tempt some to cheat the system. Of that there is no doubt. SCORM insiders have often remarked that "SCORM is not intended for high stakes assessment". However, with the proliferation of online training, the stakes are rising.
The argument for increased security in SCORM asserts that the stakes are now high enough that this simple dismissal doesn't suffice anymore. As Tom King said, "it's all low stakes until someone's attorney gets involved". But is this a SCORM issue?

Perspective - Security in Online Training

The "SCORM cheat" allows a learner to spoof two of the three aspects of secure online training. He can assert that he "experienced the desired delivery of instructional content" (i.e. completed the content) when he did not, and he can assert that he "retained the knowledge" (i.e. passed the test) when he did not.

Let's pretend for a moment that we implement Fort Knox-level security into SCORM. Assume that there is absolutely no way for a malicious user to alter the communications between the content and the LMS. Will we then have achieved secure online training? Will we then have something that is good enough for "high stakes"? Not really. Not in any of the three areas required for secure online training.

Most fundamentally, how are we sure that "the identified learner" is the one actually taking the online training? How do we close the security vulnerability of "offering to buy my buddy a pizza" if he will click through my training while he is doing his anyway? How do we ensure that the learner is really "experiencing the intended delivery of instructional content" and not just watching YouTube videos while mindlessly clicking through the content? How do we ensure that the learner has really "retained the knowledge" and isn't just looking up the answers on Google or asking his buddy in the next cubicle what the test answers are? Online training is an open book test.

These are gaping holes in the security of online training. Without the presence of a proctor, there is no way to ensure that the identified learner is actually doing anything of value. These problems are intrinsic to online training; no technical standard can overcome them. If "secure training" is an absolute requirement for a high-stakes environment, then online training will never live up to the requirements.

Given these fundamental vulnerabilities, how much does the presence of a technical hack move the needle on the overall security of online training? Not very much. As the technological experts and practitioners, we should make every effort to ensure that the solutions we provide are of the highest possible quality, but let's not lose sight of the larger picture and start to panic.

The question becomes "at what level of stakes is the risk of cheating (both technical and non-technical) tolerable?" Certainly we wouldn't certify somebody to fly a 747 or perform brain surgery based on the result of an online exam. I would venture to say, though, that despite being an OSHA-mandated compliance requirement, the possibility of cheating on annual refresher training about "Preventing Back Injury" is tolerable. Should promotions be based on the results of online training that could potentially be cheated? Not solely, but if potentially compromised training results contribute to an employee's evaluation, it certainly won't be the only, or most damaging, way for employees to game the system.

There has been some implication that the "public" is unaware of the SCORM security issue and won't tolerate this vulnerability. My perception is different…although admittedly it is just my perception, and there are certainly many people out there for whom this concern is valid. I have two pieces of evidence to support my theory though.
The first is the proliferation of secure assessment tools. Vendors like Questionmark specialize in offering these tools, and most major LMS's offer their own built-in secure assessment framework. I see the demand for these tools as an acknowledgment that the assessment provided by the content is not always valid. Secondly, I know that the public is aware of the non-technical potential cheats mentioned above. If the public judges those risks as being acceptable for their current stakes level, it seems to me that the possibility of a SCORM cheat would also fall into the level of acceptable risk.

[Again, I want to say that I do think we should make SCORM more secure. Simply because it is not strictly necessary doesn't mean we should be complacent or do less than our best. We do need to consider security in its larger context and measure its worth against other important, and often competing, design aspects.]

An Analogy

There have been several comments to the effect of "if we can conduct very high stakes operations like finance electronically, then surely we can secure training". It's true, we can conduct very high stakes operations online, but that's not to say that they are invulnerable. They are simply strong enough that the utility of their use significantly outweighs the pain caused by their misuse. This is the heart of security. There has to be enough security to prevent rampant misuse without overly interfering with mainstream legitimate use.

SCORM is all about interoperability. It is about the seamless and easy transfer of data between different systems. It reduces the friction of transactions between parties and allows almost all learning systems to work together. What is the "SCORM of the financial industry"? What is the de facto tool for easily and seamlessly transferring money between parties? What reduces the friction of everyday financial transactions and works virtually anywhere? It is the credit card.

Are credit cards completely secure? Nope, not by a long shot. In fact, I'd venture to say that SCORM is less vulnerable to misuse than a credit card, simply because of the technical knowledge required to perform the misuse. Credit cards are central to the financial system, and money is by definition "high stakes". There is huge temptation to cheat the system. Yet all I need to cheat the credit card system is the numbers and name on the front of your card. Well, of course, for those really secure credit card systems, I also need the 3 numbers from the back of your card…and, of course, for those really, really secure online systems, I also need to know your billing address (there's no way the waiter who swipes your credit card at Chili's will ever be able to find that in the phone book, right?).

Of course credit cards aren't completely secure. Credit card fraud happens all the time. In any system there are trade-offs. One well-known and intractable trade-off is the balance between ease of use/convenience/low friction/interoperability and security. In the financial industry, the market has decided that the price of insecurity (fraud by a few) is worth the added convenience of credit cards. The utility of the credit card overcomes the cost of the fraud.

It is the same way with SCORM. The utility of SCORM's ease of use, interoperability and portability outweighs the possibility that some people will cheat the system. Adding any kind of real security to SCORM would negatively affect the ease of use (i.e. development) and portability of SCORM content.
In light of the inherently insecure environment in which SCORM operates, this doesn't seem like a bad design trade-off.

What Should Be Done Today?

The SCORM vulnerability arises from the very essence of the standard. The ECMAScript API implementation is fundamentally exposed to the web browser and as such cannot be made secure against a web developer armed with a tool as common as Firebug. Any change to the specifications that retains the ECMAScript API will only very marginally improve security while (potentially) inflicting large amounts of pain on implementers. There is no modification to the specifications that I am aware of, or can imagine, that will "move the needle" significantly enough to warrant any change of implementation.

But there is something that we as a community can do to improve security. All secure systems have two parts. The first prevents a security breach (the lock on the door). The second detects a security breach when it happens so that corrective action can be taken (the burglar alarm). The solution to SCORM security in today's world isn't to add a more complex lock, it is to add a burglar alarm. Making the lock more secure makes it harder to get in and out of the door every day. A more secure lock decreases the usability of the system and doesn't add significantly to security (especially when there's an easily breakable window right next to it).

The solution to "SCORM fraud" (or more broadly, "online training fraud") is the same solution that the credit card industry uses: monitoring and detection. Credit cards could be made exceptionally secure, but their utility would go down significantly. Instead the industry has focused on detecting fraud, stopping it once it starts and punishing offenders. The training industry can do the same thing with SCORM; we just need to acknowledge that some fraud is going to happen. Misuse is an inherent part of any system.

This monitoring can be done with data that is already tracked in LMS's, and for which many LMS's even have existing reports. The appropriate security response from ADL is to issue a set of guidelines for LMS vendors that allow them to create "cheating alerts" that can be provided to concerned clients. Some simple yet powerful heuristics that can be easily monitored right now include:

Comparing the session time reported by the SCO to the actual time that the learner spent in the SCO.
Comparing the actual time it took the learner to complete a SCO against both the typical learning time defined in the metadata (if present) and the average time it has taken other learners to complete the same SCO.
Comparing the run-time data reported by the SCO against the run-time data reported by other instances of learners taking the SCO. Specifically, the LMS could look at the number of interactions reported and the identifiers of these interactions to ensure they match expectations.
Flagging instances of SCOs that report both completion and satisfaction when other instances only report completion, or other abnormal combinations of data model element usage.

Furthermore, ADL could provide similar guidelines to content developers to encourage them to use all of the appropriate data model elements and metadata elements to enable LMS vendors to detect fraud. These solutions certainly aren't bulletproof. But they "move the needle" quite a bit without significant rework. Most importantly, for those who are happy with the status quo, no changes are needed; the specification remains the same.
Changes like these allow those who need more protection to do a little work to achieve that protection, but do not adversely affect the masses of current adopters who are content. ADL or the industry could even go a little bit farther and come up with a "More Secure SCORM" profile of SCORM. I wouldn't suggest that this include the dramatic changes required for real security, but it could include a simple change like a new metadata element that defines a secure token that the content will write to cmi.location when it "really" is completed. This solution is still vulnerable, but it moves the needle a bit farther in the right direction.

What Can Be Done in the Future?

SCORM is due for an overhaul, that's common knowledge, and good people are working on the problem. The ECMAScript API is likely to be augmented with another communication scheme soon for many reasons, including security. There are a lot of design aspects that must be considered when creating the next generation of SCORM; security is but one. As a community, we need to decide how much of a priority to give security in relation to other conflicting design aspects. As I've already mentioned, given the inherent insecurity of online training, it's not a bad idea to sacrifice technical security for big wins in other areas like interoperability. But that's not to say that we can ignore it. On a security scale of 1-10 that I have just completely arbitrarily defined, I would arbitrarily give SCORM about a 3-4. If we were to bump it up to an 8-9, it would probably introduce too much friction in other areas to be considered a good design decision. Somewhere around a 5-6 is probably about right. We want a standard that allows for frictionless interoperability, and we need to be willing to sacrifice some security to achieve it. Ideally, an interoperability standard eliminates friction altogether, but enables those who are willing to put up with some friction in the name of security to do so.

Fundamentally, to be secure, we need to move communication out of the browser, because anything in the browser sandbox is hackable. This move implies the existence of server-side code. Server-side code is an enormous barrier to portability. A security solution needs to enable those who are willing to sacrifice portability for security to do so, but not remove portability for those who are unwilling to. A web services communication scheme that relies on a shared secret between the content and the LMS is a very promising path forward, and there is work being done in this arena already. This type of solution allows hosted content to use server-side code to communicate securely while also allowing portable content that runs only in the browser to communicate (albeit in a less secure way).
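To make that shared-secret idea a bit more concrete, here's a rough sketch of what a signed, server-to-server results call might look like. Everything in it (the endpoint URL, the parameter names, and the signing scheme) is hypothetical and invented purely for illustration; it is not part of SCORM or any proposed specification.

#!/bin/bash
# Hypothetical sketch only: a content server reports a completion to an LMS web service,
# signing the payload with a secret shared out-of-band so the LMS can verify the sender.
# The endpoint, field names, and signature format are all made up for illustration.
set -e

LMS_ENDPOINT="https://lms.example.com/api/results"            # hypothetical endpoint
SHARED_SECRET="${SCORM_SHARED_SECRET:?set a shared secret first}"
PAYLOAD="registration=reg-12345&completion=completed&score=0.93"

# HMAC the payload with the shared secret; awk grabs the hex digest from openssl's output.
SIGNATURE=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SHARED_SECRET" | awk '{print $NF}')

curl -sS -X POST "$LMS_ENDPOINT" \
     -d "$PAYLOAD" \
     -d "signature=$SIGNATURE"

A SCO running purely in the browser has nowhere safe to keep that secret, which is exactly the portability trade-off described above.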
Rustici Software   .   Blog   .   Aug 26, 2015 07:54am