I just got back from an exhausting but very enjoyable 5-day trip to the Bay Area where, as usual, I crammed in as many activities and meetings as possible.
I started out visiting a couple of colleges with my college-bound daughter, who is planning to major in Biology (I’m sure she’ll be taking some chemistry courses as well). Then I visited friends, customers and prospects in my old stomping grounds (I lived in Silicon Valley for most of the ’80s and the early ’90s). On Monday night one of my most senior employees drove 3 hours from Paradise (a lovely small town with a very cool name in the foothills of the Sierras) to have dinner with me at Fisherman’s Wharf. We took a cable car (his first in 30 years) to get from downtown to the Wharf area.
Lest I forget to mention, I did manage to squeeze in an afternoon this past Tuesday (April 4, 2017) at the 253rd National Meeting of the American Chemical Society (ACS). Let me digress for a minute. Speaking at the 253rd ACS meeting got me curious as to when and where ACS held its first such meeting. So late Friday afternoon and evening I recruited Grace, the Chemistry and Chemical Engineering Librarian at Stanford University, to help me answer this question. Although ACS was formed in 1876, the first of these twice-a-year meetings wasn’t held until August 6-7, 1890 in Newport, RI.
Back to my talk: I was invited to present the paper “Unique One Stop Access to a Multitude of Chemical Safety Resources” at a workshop put on by the CHAS (Chemical Health and Safety) Division of ACS. The paper summarized and demonstrated two gateways (a Stanford version and a publicly available version) that my company developed, working closely with Grace, that aggregate Chemical Safety information.
Please check out the public gateway at:
and do send me feedback through the blog on how we can improve the gateway.
Finally, as I’ve been reflecting on the Chemical Safety work we’ve done, it’s become clear that we are most of the way toward building a powerful resource to help accelerate chemical research in general.
Congratulations to Dr. Ellen Wilson (Ellee), our VP of Professional Services and Engineering, and a new PhD recipient. Ellee walked the PhD ceremony at her alma mater, Pacifica Graduate Institute, on Sunday, May 29, no doubt breathing a huge sigh of relief as she wrapped up this milestone achievement.
Deep Web Technologies hires smart people; we have smart engineers, smart project managers, smart leaders. But Ellee just may be at the top of the list. While her day job may be to guide the ebb and flow of DWT’s team, projects, and software development, her real passion is depth psychology, the exploration of theories and ideas delving into the relationship between consciousness and the unconscious.
Ellee’s dissertation didn’t just add a new set of letters behind her name; her line of exploration is unique within the realm of depth psychology. She presents credible research and ideas about how the world imprints itself on people, tying the order of mathematics throughout history and the present day to the chaos of a more sensory-based world. Are you interested in how the development of non-Euclidean geometry created a diversion from the linear focus of the time and contributed to a multiplistic expression of human thought and experience? So is Dr. Ellen Wilson.
Don’t think that receiving her PhD is the end of the line for Ellee. From a very young age, Ellee has been driven to explore what makes us tick. She has multitudinous degrees, ranging from Women’s Studies, Mathematics, and Computer Engineering to Library and Information Science, and now, Depth Psychology. Her PhD may just be her intellectual fulcrum, harnessing her past and funneling it into a rich, erudite future.
DWT is proud to have Ellee on our staff.
I recently had a conversation with a VC who brought up the acronym “SMAC”. SMAC, he explained, stands for Social, Mobile, Analytics and Cloud, and he pointed out that these four areas are red-hot with investors right now.
In a May 2014 Forbes blog article, Ravi Puri, Senior Vice President of North America Oracle Consulting Services, defined SMAC and wrote: “The convergence of these trends is creating a coming wave of disruption that will let companies drive improved customer satisfaction, sustainable competitive advantage and significant growth in enterprise value—but only if you are ready for it.”
More recently, Casey Galligan, a Morgan Stanley Wealth Management Market Strategist, advised investors not to shy away from this sector but to invest in leading SMAC companies, writing: “We believe that companies levered to these key secular growth areas will continue to be differentiators.”
It is an exciting time to be Deep Web Technologies, as we have been working in a number of these areas for a while now and are poised to make significant contributions to advance the state-of-the-art of all SMAC technology areas directly and through partners in the years ahead. Let me give you some examples:
- Social – At its heart, Explorit Everywhere! connects people to information. That’s one reason Explorit Everywhere! integrates naturally with social networking sites. These sites offer rich information to end users in the form of opinions, rants, new developments, scientific breakthroughs and more. An organization may have a variety of social networks supporting its philosophy and marketing its brand, such as Twitter, Facebook, LinkedIn, Pinterest, and blogs. These networks are rife with interesting and useful tidbits for marketing folks, researchers, students and other professionals alike. Explorit Everywhere! can search all of them for relevant information in five seconds or less. To close the loop, Explorit Everywhere! lets users share what they’ve found back to their own networks, fulfilling the number one rule of thumb for social networks: share and share alike. Social integration engages users and simplifies searching and posting across multiple networks.
- Mobile – The mobile wave is more than a fad; it’s the future. As we mentioned in our previous post, Explorit Everywhere! Goes Mobile, by the year 2020 we may see around 50 billion connected devices slinging information around the world. When it comes to mobility, we needed Explorit Everywhere! to be flexible across devices, with an ultra-sleek user interface. Advances in mobile technology require that we stay up to date, and Explorit Everywhere! accomplishes this through responsive design and by keeping a close watch on the new devices accessing our application.
- Analytics – Explorit Everywhere!’s statistics package has been collecting usage statistics for years, enabling our clients to maximize the ROI of the content they license. Deep Web Technologies is an expert at gathering information from multiple sources, aggregating the results and categorizing them into concepts that expand the breadth of a researcher’s information. Beyond that, Explorit Everywhere! can feed the pinpoint information it retrieves into best-of-breed analytical tools and software for further filtering and sifting. Explorit Everywhere! complements big data dashboards by funneling a broad swath of relevant material down the pipe for further analysis. On the front end, Explorit Everywhere! can also enhance what the user sees in the dashboard with complementary information drawn from a variety of sources, both internal and external to an organization.
- Cloud – Enterprise search is moving toward the cloud, and with that move come silos of information lost in the cloud. Explorit Everywhere! performs a real-time search of multiple databases across multiple clouds, together with information residing in corporate silos that have not yet moved to the cloud. These sources may sit behind the firewall or outside of it, and they often stump indexers because of the nature of the resources. Explorit Everywhere! connects to the databases wherever they are, making the world a much smaller place.
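To make the real-time, multi-source search idea above concrete, here is a minimal, hypothetical sketch of a federated search that queries several sources in parallel and keeps whatever returns within a fixed time budget (the "five seconds or less" mentioned above). The source names and connector function are illustrative stand-ins, not DWT’s actual code or API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from concurrent.futures import TimeoutError as FuturesTimeout
import time

# Hypothetical source list; real connectors would issue queries
# against each database or social network.
SOURCES = ["Twitter", "Facebook", "LinkedIn", "CorporateWiki"]

def search_source(source, query):
    """Stand-in for a per-source connector; returns (source, hit list)."""
    time.sleep(0.1)  # simulate network latency
    return source, [f"{source} result for '{query}'"]

def federated_search(query, budget_seconds=5.0):
    """Query every source in parallel; keep whatever arrives within the budget."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = [pool.submit(search_source, s, query) for s in SOURCES]
        try:
            for fut in as_completed(futures, timeout=budget_seconds):
                source, hits = fut.result()
                results[source] = hits
        except FuturesTimeout:
            pass  # sources that miss the budget are simply left out of this round
    return results

hits = federated_search("spring allergies")
```

The key design point this sketch illustrates is that no source can hold up the whole search: fast sources are merged as they arrive, and slow ones are dropped when the budget expires.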
Explorit Everywhere!’s integrated SMAC features create a holistic search experience, ensuring that our clients are at the forefront of technology, not trailing behind the curve. With the best of this generation and next-generation technology, Explorit Everywhere! clients are part of the changing technology scene. We’re not just riding the mobile wave; we’re regularly improving connections to social networks, tuning our analytics and simplifying our cloud-based technology. And the process of finding the most current information will shift as the future unfurls. Explorit Everywhere! will leverage SMAC and other next-generation technologies to embrace new concepts, connect with data wherever it may sit, and engage our users. Explorit Everywhere! is state-of-the-search.
March 20th marked the first day of spring. Here in northern New Mexico we have seen signs of spring (and allergies) for over a month. The crocus stretched out of the soil in February marking both a celebratory moment for my family, and one of concern. The weather is already warm and beautiful causing the apricot, plum, and juniper trees to bloom like mad. But because they’ve bloomed so early, will a late freeze wipe out our delicate fruit? And will we all sniffle and sneeze longer from the thick pollen collecting on our cars and sidewalks?
My questions took me to three different federated search engines to see what “spring” topics were circulating.
On Biznar, a social media and business search engine, I couldn’t help but search out how others were handling their spring allergies. Some dive into the Claritin box, while others go for a Kettlebell workout. My family claims to have zero allergies, although we slyly keep a tissue box handy once the juniper pollen begins to circulate. However, it looks like some research indicates that dairy may offer relief. I shall eat more yogurt from here forth.
Speaking of pollen, Environar, a federated search portal dedicated to the life, medical and earth sciences, had excellent research on pollen through the ages. Pollen has been used to document climate cycles and to indicate factors such as temperature and precipitation over the past 140,000 years or so. Pollen, atchoo!, is scientifically important.
I particularly enjoyed browsing the government portal, Science.gov, on the effects of climate change on allergies. I thought this interesting from the Annals of the American Thoracic Society found in PubMed regarding a survey on climate change and health: “A majority of respondents indicated they were already observing health impacts of climate change among their patients, most commonly as increases in chronic disease severity from air pollution (77%), allergic symptoms from exposure to plants or mold (58%), and severe weather injuries (57%).” I shall buy more tissue.
While my questions may not have precise answers, I can at least plan ahead at the grocery store when I see high pollen counts – yogurt and tissues. And perhaps I’ll have a new appreciation for the contributions pollen has made to our scientific community.
Explore your Pollen Allergy Forecast at Pollen.com: http://www.pollen.com/allergy-forecast.asp. Happy Spring and Happy Searching!
Google has completely changed the way most of the civilized world gets its information. Most know that Google wasn’t the first, but thanks to their effective branding, few realize that Google isn’t the best. For a long time, I’ve made the claim that Google will not be remembered as the greatest technology company of all time, but the greatest marketing company. The name Google was inspired by the term “googol”, the number 1 followed by 100 zeros, and was intended to refer to the number of results returned with each search. However, this philosophy is diametrically opposed to the first 30 years of online information retrieval, when librarians were trained to craft very specific search queries so that only a few results would be returned. If too many results were delivered, librarians considered it a bad search, because a large result set was too difficult to manage. Effective research was about accuracy, not quantity. Like Heinz, which turned a huge problem (the ketchup wouldn’t come out of the bottle) into a marketing success, Google has convinced the world that large numbers of search results are a good thing. But the fact is, when it comes to search, more is not better. Google is a victim of its own success. As more and more content pours into the Google index, search results are as diverse as they are voluminous.
Make no mistake, Google’s search technology has improved significantly, but its index is growing at a rate of 100% per year. It’s too broad, covers too many subject areas, and is too dependent on the most popular links. This is because Google is intended to be all things to all people. If I want to see the menu of a Chinese restaurant, or the show times for a movie, or the hours of operation of my nearest Target, or the phone number of my optometrist, there is nothing better than Google. But if you want to do serious research, whether it’s chemical engineering or art history, Google should never be your first choice. Unfortunately, Google has become the first choice for many professionals. While hospitals spend hundreds of thousands of dollars per year on peer-reviewed medical information, 83% of physicians go to Google first to do medical research. While academic libraries spend tens of thousands on the finest collections of digital content, students choose Google as their first and only source of information for research.
Deep Web Technologies has spent the past 15 years aggregating content in real time, in specific subject areas, so that users could find the information they need quickly and effectively. We enable users to simultaneously search hundreds of content repositories specifically related to their subject discipline, delivering the most relevant results from the latest publications. So, when veterinarians are researching jaguars, their result sets don’t include articles about automobiles or football teams. With Deep Web Tech, in addition to getting only relevant information, users get the most current information that has been published. DWT’s search technology does not require indexing, as our technology accesses the original source of the content. As soon as it is published, it is accessible to DWT customers. Just as important is the access to multiple sources from a single interface, which enables articles from other sources of content to be compared, side by side, without jumping from one site to another.
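As an illustration of the aggregation step described above (merging result lists from many sources so that the most relevant, current items surface once, side by side), here is a small hypothetical sketch. The field names and scores are assumptions for the example, not DWT’s actual data model:

```python
def merge_results(result_lists):
    """Merge per-source result lists, deduplicate by normalized title,
    and order by each item's source-reported relevance score."""
    seen = {}
    for results in result_lists:
        for item in results:
            key = item["title"].strip().lower()
            # When two sources return the same article, keep the higher-scoring copy.
            if key not in seen or item["score"] > seen[key]["score"]:
                seen[key] = item
    return sorted(seen.values(), key=lambda r: r["score"], reverse=True)

# Example: two sources both return the jaguar article; only one copy survives,
# and nothing about automobiles or football teams ever enters the pool.
merged = merge_results([
    [{"title": "Jaguar Habitats", "score": 0.9, "source": "PubAg"}],
    [{"title": "jaguar habitats", "score": 0.7, "source": "BioOne"},
     {"title": "Big Cat Genetics", "score": 0.8, "source": "BioOne"}],
])
```

Because the merge operates on live responses from the original sources rather than on a pre-built index, a newly published article is available to be merged the moment its source returns it.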
Google has become the most popular search engine in the world. But popularity doesn’t always translate into quality, just take a look at prime time television.
The other day Abe received in the mail the document from the US Patent and Trademark Office (USPTO) granting us a trademark and service mark on the multilingual version of Explorit®, our federated search system. As any of you who have filed patents, trademarks or service marks surely know, the process is arduous and time consuming. So, needless to say, we’re delighted to have received the USPTO document.
Curious to see when we first began promoting Explorit®, I took a journey back in time, courtesy of Archive.org, aka the Wayback Machine. Deepwebtech.com was first crawled by Archive.org on August 2, 2002.
This was our original logo:
And, here’s a piece of the original description of Explorit®:
Explorit provides the capability to deploy small to large-scale collections of information on the web – fully searchable and easily navigable – to a wide range of user communities. Large organizations or information purveyors with many collections of heterogeneous information benefit from the consistency and usability of the Explorit user interface: whether they deploy one collection or one hundred, users quickly learn that all Explorit applications operate essentially the same way, and variances are determined by content rather than inconsistent design.
While Explorit® has greatly evolved over the past ten years, some things never change. Yes, the architecture, the user interface, and the back-end software were completely rewritten years ago to exploit modern programming technologies and web services standards. And, yes, the features have evolved to keep up with market demand. But the values which drive the development of our software haven’t changed. Explorit® has been and always will be about helping libraries and research organizations mine the deep web for the most useful information from dozens or even hundreds of high-quality sources.
We’re proud to have that piece of paper; we’ve framed it. But, more important than the document is what it represents – a commitment to serving research by being on the leading edge of information retrieval.