One "VistA on Linux/GT.M" installation and management mechanism

Every VistA variant has its strengths and weaknesses. One of OpenVistA's is its straightforward and open source install procedure (http://medsphere.org/community/project/gtm/blog/2010/01/13/install-openvista-in-about-10-minutes-updated):

* apt-get the OpenVistA environment and download the zip with OpenVistA's globals and routines
* command-line utilities create directories, compile code and set up instances
* the only annoying thing is that you have to manually reset the value of "box pair" in "TaskMan Site Parameters"

Then you just do "/etc/init.d/openvista start" and all is well.

WorldVistA's procedure is much more trying, but thankfully you can make WorldVistA run in the OpenVistA environment (a shell sketch appears at the end of this post):

* download the zip with WorldVistA's globals and routines
* create a new VistA instance with OpenVistA's command-line utility, ovinstance add, but instead of pointing to OpenVistA's globals and routines, point to WorldVistA's. There's one gotcha: WorldVistA ships globals in a .dat, OpenVistA uses the (more portable?) .zwr format
* again you need to manually reset "box pair"
* BUT you also have to manually delete the contents of the TaskMan global!
* AND WorldVistA doesn't use MSC's add-on to GT.M that makes its sockets behave like Cache's, so you must configure and start xinetd to run the RPC Broker

With this done, you type the (incongruous) "/etc/init.d/openvista start worldvista" and an instance of WorldVistA called worldvista comes to life.

I presume, but haven't yet tested, that FOIA would run in MSC's environment in the same way.

I think there should be one environment on Linux+GT.M for all VistAs, in preparation for the setup of one unified VistA. I can't see a need for differences in something this rudimentary. Nor do I think it should be an involved process.

My vote is that OpenVistA's setup forms the basis of a (converged) "installation and management environment", but perhaps others know of better setups than MSC's?
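To make that concrete, here is roughly what the WorldVistA-in-OpenVistA procedure looks like from the shell. This is a sketch, not the documented interface: the file names, paths and exact ovinstance arguments are assumptions, and while ^%ZTSK and ^%ZTSCH are the usual TaskMan task and schedule globals, check your system before killing anything.

    # Sketch only -- file names, paths and ovinstance arguments are assumptions.
    unzip WorldVistA.zip -d /tmp/wvista

    # Create the instance with OpenVistA's utility, pointing at WorldVistA's
    # routines and globals instead of OpenVistA's:
    ovinstance add worldvista    # plus whatever flags select /tmp/wvista

    # The gotcha: OpenVistA ships globals as a .zwr extract, which loads with
    #   mupip load globals.zwr
    # while WorldVistA ships a .dat, a ready-made GT.M database file that is
    # copied into place behind a matching global directory instead:
    cp /tmp/wvista/mumps.dat /opt/openvista/worldvista/g/

    # Manually clear the TaskMan globals (^%ZTSK = tasks, ^%ZTSCH = schedule):
    mumps -run %XCMD 'KILL ^%ZTSK,^%ZTSCH'

    # Reset "box pair", configure xinetd for the RPC Broker, and finally:
    /etc/init.d/openvista start worldvista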

Comments

One "VistA on Linux/GT.M" installation and management mechanism

David Whitten

As an aside, Astronaut has a program (which I wrote for Ignacio) that will set up the box-volume pair.

How does the TaskMan global get handled by OpenVistA?
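(For context: "Box-Volume Pair" is the .01 field of the TaskMan Site Parameters file, #14.7, stored under ^%ZIS(14.7,). A program like the one David describes might reduce to a FileMan edit along these lines; the entry number, the "ROU" volume set name and the use of GT.M's %XCMD are assumptions to verify on a real system.)

    # Hypothetical sketch: set BOX-VOLUME PAIR (field .01 of file #14.7) to
    # "<volume set>:<this box>" through a classic FileMan DIE call.
    BOXVOL="ROU:$(hostname)"
    mumps -run %XCMD 'SET DIE="^%ZIS(14.7,",DA=1,DR=".01///'"$BOXVOL"'" DO ^DIE'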

Environment setup: zwr, MSC's GT.M extension and TaskMan

Conor Dowling
Dave,

a script to set up TaskMan parameters is exactly what should be in a cross-VistA setup (for OV: add to OVINSTANCEADD).

On MSC's extension to GT.M, Jon Tai wrote:

"Native GT.M lacks the ability for the parent mumps process to pass the socket onto the JOB'd child. The official GT.M answer to this problem is to use a daemon dedicated for the purpose such as xinetd instead of the mumps listener process. While this makes a ton of sense from a UNIX point of view (do one thing and do it well, right?), it makes managing a VistA system much more complicated because now the RPC broker file (where VistA admins are used to configuring things) doesn't really do anything, and the RPC broker menus don't do anything, either. The VistA admin must know to go out to the Linux system and start/stop/reconfigure xinetd, and he requires root access to do so. So, as part of the OpenVista/GT.M integration project, we wrote a small C library for GT.M that allows the socket passing. With a minimal code change to the RPC broker (call our C library instead of JOBing directly), the broker now functions exactly as it does on Cache, so all the configuration files and menus still work. WorldVistA doesn't use this library; it expects you to set up xinetd."

Is this the sort of "GT.M portability" change that you'd put into WorldVistA?

And one other question: why does WorldVistA ship a .dat for globals but MSC ships a .zwr - is ZWR more portable?

Conor
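(For readers who haven't done the xinetd setup Jon describes: conceptually it is one service stanza telling xinetd to launch a Broker process per connection. A hedged sketch follows; the service name, port, user and script path are assumptions, and GTMLNX^XWBTCP is the Broker entry point usually cited in WorldVistA GT.M setup notes, so verify against your release.)

    # Sketch: hand RPC Broker connections to GT.M via xinetd (names assumed).
    cat > /etc/xinetd.d/vista-rpcbroker <<'EOF'
    service vista-rpcbroker
    {
        type        = UNLISTED
        port        = 9430
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = vista
        server      = /home/vista/bin/broker.sh
        disable     = no
    }
    EOF
    # broker.sh (hypothetical) would source the GT.M environment and exec
    # "mumps -run GTMLNX^XWBTCP"; xinetd hands each accepted connection to
    # that process on stdin/stdout.
    service xinetd restart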

One "VistA on Linux/GT.M" installation and management mechanism

George Lilly

Conor:

Actually, there's a lot to be said for using xinetd instead of rolling your own in a GT.M VistA installation, especially with the increasing use of the internet for connectivity.

Here are some interesting factoids:

http://www.xinetd.org/faq.html

I don't know what the right answer is, but the GT.M philosophy of letting Linux do the things it does well (security, tasks, first-level access control, etc.) has made for very robust and scalable VistA configurations.

So we should consider this one carefully.

gpl


SemiVivA installations of VistA

K.S. Bhaskar

There are good reasons why a package such as a Debian package may not be the best way to package VistA, chief among them that you can't install an updated package and have all your environments updated. See the discussion thread "How Debian Packaging practices could apply to VistA maintenance and distribution" started by Luis Ibanez, to which I contributed, at http://lists.debian.org/debian-med/2012/01/threads.html#00336

I used to package VistA+GT.M as tarballs called SemiVivA packages which you simply download and unpack.  I may start creating them again, but will probably separate VistA from GT.M.

For what it's worth, I think setting up VistA to start up server processes in response to connection requests at ports under the control of an Internet superserver like xinetd is the way to go.  That's what they were designed to do.  xinetd does what it is supposed to do - including security and logging functions - and then gets out of the way leaving the server process with a client connection.  In GT.M we try to do well what we can do well, and to use other tools that do well what they do well.
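(The tarball model really is that small; something like the following, with obviously hypothetical names:)

    # Sketch of a SemiVivA-style install -- URL and file names hypothetical.
    wget http://example.org/semiviva-vista.tar.gz
    tar xzf semiviva-vista.tar.gz
    # ...then use the startup scripts shipped inside the unpacked tree.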


One "VistA on Linux/GT.M" installation and management mechanism

Conor Dowling

Bhaskar,

1. XINETD "vs" RPC Broker Options
I can see the advantage of XINETD and I think Jon's comment from the
Medsphere site reflects it too. But also, as Jon says, VistA has a series
of menus and Broker options for managing connection requests. If you force
the xinetd route and don't support "job forking" (sic) in GT.M then that
function doesn't behave as advertised in the VistA documentation. If
there's a consensus for XINETD then any VistA running on GT.M should NOT
provide these menus.

2. On tarballs: I prefer this simple option too, or at least think it
should be offered in addition to any other OS-specific bundle. This doesn't
change wanting to have common startup, shutdown, backup etc. utilities. One
thing such utilities should do is remove the need to set up any basic system
configuration by setting or killing globals by hand (a sketch follows)!
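(A hedged sketch of what that could mean in practice: a single hypothetical helper that wraps the TaskMan reset, so the operator never touches ^%ZTSK or ^%ZTSCH directly.)

    #!/bin/sh
    # Hypothetical "taskman-reset <instance>" helper a converged toolset could
    # ship: stop the instance, clear TaskMan's task (^%ZTSK) and schedule
    # (^%ZTSCH) globals, restart.
    instance="$1"
    /etc/init.d/openvista stop "$instance"
    mumps -run %XCMD 'KILL ^%ZTSK,^%ZTSCH'
    /etc/init.d/openvista start "$instance"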

There's a middle way between packaged Virtual Machines and "in the weeds,
raw setups" and I think MSC's is the closest out there to it,
Conor


Distinguishing between functional and operational characteristics

K.S. Bhaskar

First a bit of a philosophical ramble.  While recognizing that there is a gray area rather than a clean separation, I think it is important to acknowledge the difference between functionality and operational characteristics of an application like VistA.  Since MUMPS implementations are not commoditized, the goal of VistA portability should be common functionality across MUMPS platforms, while allowing each VistA implementation to avail of the operational benefits of its underlying platform.

Since the VA uses only one MUMPS platform, VistA from the VA does combine functional and operational configuration without differentiating between them.  But that is not the case with the VistA community at large.  A true story: when a friend many years ago bought her first cordless phone, her husband noticed that she always stood near the base when talking.  Only when he pointed it out to her did she have an "aha" moment and start walking around the house with the handset.

More below - look for [KSB].

Regards
-- Bhaskar

On 02/04/2012 07:29 PM, Conor Dowling wrote:
Bhaskar,

1. XINETD "vs" RPC Broker Options

I can see the advantage of XINETD and I think Jon's comment from the Medsphere site reflects it too. But also, as Jon says, VistA has a series of menus and Broker options for managing connection requests. If you force the xinetd route and don't support "job forking" (sic) in GT.M then that function doesn't behave as advertised in the VistA documentation. If there's a consensus for XINETD then any VistA running on GT.M should NOT provide these menus.

[KSB] Since these discussions will be archived and read by many who are not familiar with MUMPS implementations, I think it is important to note that we are not talking about the MUMPS JOB command, which both platforms support. Rather, it is the way incoming requests are handled - whether by an Internet superserver like inetd/xinetd (GT.M) or by spawning a child process for each connection request (other MUMPS implementations).

I agree that when VistA is deployed on GT.M, operational configuration options handled at the MUMPS platform level or OS level should not be presented to the person configuring VistA. This should go on the list of changes to be made. It's not critical in the short term, but it would make VistA more user-friendly in the long run.

2. On tarballs: I prefer this simple option too, or at least think it should be offered in addition to any other OS-specific bundle. This doesn't change wanting to have common startup, shutdown, backup etc. utilities. One thing such utilities should do is remove the need to set up any basic system configuration by setting or killing globals by hand!

There's a middle way between packaged Virtual Machines and "in the weeds, raw setups" and I think MSC's is the closest out there to it,

[KSB] I agree.  While a small number of  people will need the weeds, the majority do not.  Also, while configuration will involve setting / killing global variable nodes, this should be done through an API / menus rather than directly.

One important thing is that with one VistA installation on a machine, there can be multiple VistA environments on that machine.  Startup, shutdown, backup, etc. scripts are properties of each environment whereas the code base is largely a property of the VistA installation (although there can of course be environment specific code).  One area where MUMPS platforms differ is the level of separation / sharing between environments on the same machine.
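(On GT.M that separation might look like the following; directory names are hypothetical, while gtmgbldir and gtmroutines are the standard GT.M environment variables.)

    # Sketch: two environments over one installation (paths hypothetical).
    # Per-environment state: each environment gets its own global directory,
    # and therefore its own database:
    export gtmgbldir=/opt/vista/envs/test/g/mumps.gld
    # Largely shared code base, with an environment-specific area searched
    # first for any environment-specific routines:
    export gtmroutines="/opt/vista/envs/test/o(/opt/vista/envs/test/r) /opt/vista/r"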


-- GT.M - Rock solid. Lightning fast. Secure. No compromises.


The OpenVista Integration

Ben Mehling

The OpenVista Integration package includes utilities for creating, removing, backing up and restoring VistA instances. The latest code base includes tools for managing replicated pairs (GT.M for OpenVista, MySQL for Mirth Connect, and filesystem sync for M source and the Images Repository) from a single "control panel". It also includes numerous fixes to VistA code to eliminate errors caused by Cache-specific or unwrapped Cache calls.

The philosophy of the packaging was to create a series of utilities that any Linux administrator would be comfortable using AND any VistA administrator would essentially not notice. While the discussion thus far has centered around the RPC Broker, the network layer change also affects the HL infrastructure, the UI/LEDI package, etc. Managing tens of interfaces across multiple namespaces/UCIs is natural and intuitive using the native VistA menus, versus creating new scripts every time an HL interface is created (as an example).

The bottom line was that any utilities Medsphere created (and has continued to improve upon) were focused on operational efficiency, since Medsphere and its customers use these tools on a daily basis. The tools needed to work intuitively and correctly. From Medsphere's perspective, the existing state of the art (shell scripts, xinetd, etc.) leaves far too many sharp edges for system administrators to cut themselves on.
