
Everyone knows that some extra configuration is needed in Oracle to work with ST_GEOMETRY outside ArcCatalog/ArcMap. The confusion comes in over what is considered "outside ArcCatalog/ArcMap" and what to do if ST_GEOMETRY is not the feature shape storage type. This blog post will hopefully help answer some of these questions and help you set up your Oracle geodatabase to work with ST_GEOMETRY.

 

I would also like to mention that all the information stated here is published by ESRI in various KB articles, documentation, help resources, etc. I am not stating anything new, just consolidating the information to present it in one location for anyone interested.

 

If you never interact with your geodatabase outside ArcCatalog/ArcMap, then you have nothing to worry about (but come on, when is that really true).

If your feature class storage data type is ST_GEOMETRY and you want to do any sort of spatial queries and/or modify data outside of ArcMap, you will need to set up your geodatabase for ST_GEOMETRY.

If your feature class storage data type is NOT ST_GEOMETRY, but you would like to query metadata views such as SDE.GDB_ITEMS_VW using SQL*Plus or SQL Developer, then you will need to set up your geodatabase for ST_GEOMETRY.

If your feature class storage data type is NOT ST_GEOMETRY, and you are not interested in the above-mentioned views, then you may not need to set up your geodatabase for ST_GEOMETRY.

 

My recommendation would be to set up your geodatabase to work with ST_GEOMETRY anyway. That way you do not have to worry about it down the road when you want to use one of the capabilities mentioned above.

 

Geodatabase Setup

 

All the setup steps are performed on the database server, and you will need your DBA to perform them. The setup consists of four steps.

 

1. ESRI provides a library that is used for ST_GEOMETRY functions and operators through Oracle's external procedure (extproc) process. This library needs to be on the database server. The library can be found in any ArcGIS Desktop installation under the Desktop 10.x\DatabaseSupport folder. Choose the appropriate RDBMS\DatabaseServerOS directory and you should see a file (for example st_shapelib.dll for Windows or libst_shapelib.so for Linux). Copy the library file to a directory of your choosing on the database server (in this example, I am going to use C:\Oracle\ArcGIS as the location of the library file).

     In addition, if your database OS is Windows, ensure that the 64-bit Visual C++ 2008 SP1 Redistributable Package is installed; if it is not, it can be downloaded from Microsoft.

 

2. Create a library object in the Oracle geodatabase as the SDE user. This library object points to the full path (directory and file name) of the library file placed on the server in step 1.

Extending our example from step 1, log into the geodatabase as the SDE user and run the following command.

CREATE OR REPLACE LIBRARY st_shapelib AS 'C:\Oracle\ArcGIS\st_shapelib.dll';

     Note: the name of the library object created in this step must be st_shapelib.
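
To confirm that the library object was created and points at the right file, you can query the standard USER_LIBRARIES dictionary view while still connected as SDE. This is just a sanity check of my own, not one of ESRI's documented steps.

-- sanity check: confirm the library object exists and references the correct file
SELECT library_name, file_spec, status FROM user_libraries WHERE library_name = 'ST_SHAPELIB';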

 

3. Modify the appropriate Oracle files.

If your Oracle version is 11g or above, then you only need to modify the extproc.ora file located in the ORACLE_HOME\hs\admin directory. The extproc.ora file has a bunch of comments at the top; the line of interest is the last, uncommented line. In its default state that line reads

SET EXTPROC_DLLS=

For our example, change this line to the following.

SET EXTPROC_DLLS=C:\\Oracle\\ArcGIS\\st_shapelib.dll

If you need to add more than one DLL, they need to be separated by ; on Windows and : on UNIX/Linux. On Windows the path works with \\ and on UNIX/Linux the path works with / (I know that is not what the comments at the top of extproc.ora say, but trust me, that is the only way I have gotten it to work).

 

If your Oracle version is 10g, then you do not have an extproc.ora file (it was not available in 10g). In this case you have to modify the tnsnames.ora and listener.ora files on the database server so that the external calls can be made. Both files are located in the ORACLE_HOME\network\admin directory.

 

Caution: Before making changes to tnsnames.ora and listener.ora, please make backup copies of these files.

 

The tnsnames.ora file contains a directory of known database services (local and remote). One entry is used specifically by the local database server to make external function calls (to the extproc). This entry is labeled EXTPROC_CONNECTION_DATA (it must always be labeled with this name). Add or modify the entry in tnsnames.ora

 

EXTPROC_CONNECTION_DATA =

(DESCRIPTION =

    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(Key = EXTPROC1)))

    (CONNECT_DATA = (SID = PLSExtProc)(PRESENTATION = RO))

  )

 

The next step is to modify listener.ora. Add an entry in SID_LIST_LISTENER (this example uses a default listener named LISTENER; if your listener uses a different name, add the entry in its SID_LIST). My entire listener.ora looks like the following

 

LISTENER =

  (ADDRESS_LIST =

       (ADDRESS= (PROTOCOL= IPC)(KEY = EXTPROC1))

       (ADDRESS= (PROTOCOL= TCP)(Host= hostname.domain.com)(Port= 1521))

  )

SID_LIST_LISTENER =

  (SID_LIST =

    (SID_DESC =

      (SID_NAME = PLSExtProc)

      (ORACLE_HOME = C:\oracle\product\10.2\db_1)

      (PROGRAM = extproc)

      (ENVS="EXTPROC_DLLS=C:\Oracle\ArcGIS\st_shapelib.dll")

    )

  )

 

The KEY value in listener.ora must match the KEY in the EXTPROC_CONNECTION_DATA entry in tnsnames.ora.

 

Oracle listener files are highly configurable and have lots of additional options. All the possible options that can be configured in listener.ora and tnsnames.ora are beyond the scope of this blog post. The listener.ora and tnsnames.ora on your database server may look very different from the simple configuration example above. Please refer to the Oracle Database Net Services Administrator's Guide for details about configuring your listeners.

 

4. In the case of Oracle 10g you will need to restart your listener so that the new configuration takes effect. Creating library objects may cause some dependent objects to become invalid, so as the final step it is a good idea to recompile the SDE schema to ensure all of its objects are valid.

Log into the geodatabase as the SYS user and run the following command.

EXECUTE sys.utl_recomp.recomp_serial('SDE');
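
If you would like to verify that the recompile left nothing behind, a simple query against DBA_OBJECTS (also run as SYS) lists any SDE objects that are still invalid; ideally it returns no rows. This is an optional sanity check of my own, not an ESRI requirement.

-- optional check: list any SDE objects that are still invalid after the recompile
SELECT object_name, object_type FROM dba_objects WHERE owner = 'SDE' AND status = 'INVALID';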

 

 

This completes the Oracle geodatabase setup to work with ST_GEOMETRY.
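
If you want a quick smoke test that the external procedure is actually being called, something like the following from SQL*Plus forces a round trip through st_shapelib. This is my own quick check rather than an official ESRI validation step, and the SRID you pass must already exist in SDE.ST_SPATIAL_REFERENCES in your geodatabase (replace 4326 with one that does).

-- smoke test: build a geometry from WKT and convert it back to text.
-- A misconfigured library/extproc shows up here as an external procedure error.
SELECT sde.st_astext(sde.st_geometry('POINT (10 10)', 4326)) FROM dual;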

As all of you know, I do not blog very often. Maybe I do not have much to say. But with all the confusion lately about Oracle's CPU (which stands for Critical Patch Update), the OPEN_CURSORS parameter, and ArcMap crashing, and with all the calls I have received and questions from my colleagues on the Support team, I have decided to write about it. In this blog post, I will attempt to clarify things in the hope that it will help others.

 

I will start with a small history lesson to clarify the timeline. Oracle CPUs are released every quarter (four times a year); the idea is that Oracle proactively patches issues, closes security holes, resolves bugs, etc. The predictable release schedule helps DBAs plan their maintenance windows. With the release of the October 2014 CPU, ESRI, and consequently our software, was impacted. The details of how and why follow, in case you are interested. They are not merely academic, as knowledge of the problem always helps us understand its resolution.

 

Prior to Oracle's October 2014 CPU patch, anyone could access (select) system parameter values, in this case the value of the OPEN_CURSORS parameter. ESRI reads this parameter when it opens an ArcMap/ArcCatalog session to find out what it is set to. I am not sure what they use the value for, but I can make an educated guess; it is not relevant for the purposes of this post. Suffice it to say that if the client session could not read this parameter, the ArcMap/ArcCatalog session would close (without much explanation), resulting in what looked like ArcMap/ArcCatalog crashing.

 

Oracle decided that any user being able to read system parameters without explicit access posed a security risk. I can see their point and concur: you can do some damage if you know what all the parameters are set to. In an effort to close this security hole, Oracle removed select access to system parameters from all regular users; a user who wants to read system parameters now has to be given explicit access privileges. This security hole was closed in the October 2014 CPU patch (and in every CPU patch since, as Oracle CPU patches are cumulative).

As soon as our customers applied Oracle's October 2014 CPU patch, ArcMap/ArcCatalog sessions crashed, as they could not read the OPEN_CURSORS value and got an Oracle error "ORA-01031: insufficient privileges." This error was not readily visible from the ArcMap/ArcCatalog GUI.
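
If you are curious, you can reproduce the symptom yourself from SQL*Plus. The KB article quoted later in this post names dbms_utility.get_parameter_value as the subprogram ArcGIS executes; the anonymous block below is my own sketch of that call, not ESRI's code. Run it as a regular (non-privileged) user against an instance with the October 2014 CPU (or later) applied and you will see the same ORA-01031.

SET SERVEROUTPUT ON
DECLARE
  l_int  BINARY_INTEGER;
  l_str  VARCHAR2(4000);
  l_typ  BINARY_INTEGER;
BEGIN
  -- read the open_cursors parameter the way a client application can;
  -- raises ORA-01031 for unprivileged users once the Oct 2014 CPU is applied
  l_typ := DBMS_UTILITY.GET_PARAMETER_VALUE('open_cursors', l_int, l_str);
  DBMS_OUTPUT.PUT_LINE('open_cursors = ' || l_int);
END;
/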

 

Clients complained to ESRI, and after some early diagnostics, finger pointing, etc., a workaround was devised. You can read all about the workaround in ESRI KB article 43293. ESRI knew that this workaround would not be sufficient in the long run for security-conscious DBAs and had to provide a different solution. Near the close of 2014, ESRI released the ArcGIS 'Oracle Critical Patch Update - October 2014' Connection Issue Patch. You can read about this ArcGIS patch at the link provided. In essence, instead of looking for the OPEN_CURSORS parameter in Oracle's system table, the patched software looks for the value in SDE's SERVER_CONFIG table. Since this table belongs to SDE and ESRI has control of it, ESRI does not have to depend on Oracle's security changes. To actually copy the value of OPEN_CURSORS from Oracle's system table into SDE's SERVER_CONFIG table, ESRI published KB article 43377.
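
I will not reproduce the workaround here (follow the KB for the supported steps), but conceptually it amounts to the DBA explicitly handing back the read access that Oracle took away. As a purely illustrative sketch, with gis_user as a hypothetical grantee (the actual objects and grantees are spelled out in KB 43293):

-- illustrative only; see ESRI KB article 43293 for the documented workaround
GRANT SELECT ON sys.v_$parameter TO gis_user;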

 

"All of this seems clear, so where is the confusion?" you ask.

 

I had to explain all of the above before getting to the one important point that causes all the confusion. If you read KB 43377 (referenced above) closely, you will notice one critical detail. I am going to quote it verbatim from the article so that I do not paraphrase.

 

"In summary, when an ArcGIS session connects to an Oracle instance, ArcGIS executes the dbms_utility.get_parameter_value subprogram, if an error is encountered (because the user connecting     lacks privileges) ArcGIS checks if the open_cursors parameter exists in the geodatabase server_config table and when not present, automatically sets the default to a value of 300."

 

This means that if you apply the ArcGIS connection issue patch (after applying Oracle's October 2014 CPU patch or later) but do not follow the steps discussed in KB 43377, you could end up with an issue. Let me explain with an example, because to me an example always helps.

 

1) I am a GIS Administrator of Company XYZ. I have an Oracle database 11g and ArcGIS 10.2.X. Everything is fine in my land and my GIS users are happy. Life is good.

2) Let us assume that my Oracle OPEN_CURSORS parameter is set to 2000.

3) My Oracle DBA comes to me one day (in Jan 2015) and says that he is going to apply Oracle CPU to make the database current with security patches. The patch is installed on Friday evening.

4) Monday morning, none of the GIS users can connect and I am in a panic. I hop online, remember what my DBA did over the weekend, and figure out the issue. I plan to apply ESRI's patch to resolve it (after testing), and in the meantime I put in the workaround (with the help of my DBA) so that my GIS users can work.

5) I test and then install the patch in Production, but I forget about KB article 43377, or the KB procedure does not complete successfully due to permission issues.

6) Now I have an issue where my OPEN_CURSORS parameter in Oracle is set to 2000, but to ESRI it is set to 300. This is due to a combination of step 5 and the important point quoted from KB 43377 above.

 

I start getting intermittent issues: sometimes an ArcMap session crashes, but I have no idea why. The problem is not repeatable, but it happens when I have many layers in my stored display or during the reconcile/post/compress process; in effect, every time more than 300 cursors are opened. Oracle does not give me an error (specifically an ORA- error) because I am still under Oracle's limit. I am lost...

 

Hope this makes the picture clear.

 

To avoid issues like these, please make sure that after you have applied the ArcGIS patch and properly followed the procedure in KB 43377, the OPEN_CURSORS parameter is present in SDE's SERVER_CONFIG table and its value matches the value of the OPEN_CURSORS parameter in Oracle.
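
A quick way to double-check both sides is to compare the values directly in SQL. The SERVER_CONFIG column and property names below reflect the layout I typically see in that table; treat them as assumptions and adjust to match your geodatabase.

-- Oracle's value (run as a user allowed to read v$parameter, for example the DBA)
SELECT value FROM v$parameter WHERE name = 'open_cursors';

-- the value the patched ArcGIS reads from the geodatabase
SELECT prop_name, num_prop_value FROM sde.server_config WHERE UPPER(prop_name) = 'OPEN_CURSORS';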

 

I hope this blog post helps. Please let me know if you have any questions.

I recently visited the Nevada Regional Transportation Commission (RTC) to help them upgrade their Fiber Manager. It was quickly apparent that the folks there know their fiber optic equipment well. The project team got to visit a test patch location where they had various devices, a traffic light, a camera, and configurations set up for testing. While discussing the setup, the question of how to model what they were loosely calling a “gator patch” came up any time the conversation turned to Fiber Manager.

 

The gator patch is actually a pre-terminated patchpanel used to connect the fiber optic network to control switches at an intersection. The switches in this scenario are not endpoints, so modeling them in Fiber Manager as the devices they are would not meet RTC's needs. Ensuring that a trace goes through these locations and shows a drop at each switch would require setting up a jumper or adding a feature to the model that is not physically there in the field.

 

After discussing the possibilities of creating custom features, custom tables, and custom model names, it was suggested that the gator patch could just be modeled as a standard patchpanel. A patchpanel with the subtype “intersection” or “intersectionSwitch” only requires a new subtype to be defined for patchpanel. Though the concept of defining backside ports on a device that physically has all of its ports on one side was uncomfortable for the engineers, the decision was made to use this model.


The RTC also uses Wavepoint so that various workers out in the field can view and trace the facilities. Since the RTC manages the fiber optic facilities for multiple entities, security (who should have access to what features and data) plays a big role in their Wavepoint implementation. Because the RTC has so many users of Wavepoint, they opted to configure it for use with Windows authentication. They also need to allow different access via different roles, and the person managing these roles requires a quick and easy way to add and drop people from these roles.


The proposed and accepted solution was to use an XML authentication store for each running instance of Wavepoint. Currently the RTC has two databases and an instance of Wavepoint for each. In the future they plan to have multiple fiber optic data sets for each entity and a Wavepoint instance for each data set. Each Wavepoint instance will have an XML authentication store located in its App_Data folder to manage the users allowed to access that part of the RTC-managed fiber optic facilities.


Implementing an XML authentication store in the Wavepoint configuration is similar to implementing Windows authentication. In the web.config file, a connection string needs to be defined that points to the authentication store XML file.

<connectionStrings>
  <add name="FileBasedPolicyStore" connectionString="msxml://C:\Inetpub\wwwroot\Wavepoint\App_Data\TestAuthStore.xml"/>
  ...
</connectionStrings>



 

Then that connection string needs to be referenced by name in the “WavepointRoles” provider under the role manager.

<roleManager enabled="true"
             defaultProvider="WavepointRoles">
  <providers>
    <add name="WavepointRoles"
         type="System.Web.Security.AuthorizationStoreRoleProvider"
         description="Wavepoint test authorization store"
         connectionStringName="FileBasedPolicyStore"/>
  </providers>
</roleManager>



 

This XML authentication store will give the RTC more flexibility and easier user management as they administer multiple Wavepoint instances across their network.


Have you ever wished you could change your system and immediately know if said change had the desired effect?  Proper testing is paramount to system administration.  Whether it’s new functionality, code changes, or configuration changes, a change usually necessitates full promotional-path testing.  Typical testing methods are mostly manual, which is time and resource intensive.  Well, I have good news for you, because implementing automated testing for your system may very well be the answer you seek!

 

The Proof is in the Pudding

 

Back in 2014, we had a sizable client (around 2 million customers) going through a significant ArcFM/Responder upgrade.  Our “ecosystem” consisted of around 50 machines with several integration systems interacting.  In a previous upgrade, system changes took days to test, and even longer for resources to provide the data necessary to ascertain whether performance targets were being hit and what might need further changing/testing.  Bug testing fared little better.  Clearly a new plan was needed for the next upgrade, and Schneider Electric (SE) did not disappoint.

 

We began by harnessing the power of a continuous integration (CI) platform as a central controller, and utilized software that allowed for repeated automated actions to take place. Leveraging the control mechanisms inherently built into CI platforms, we were able to create one-click tests that did a variety of steps—actions that would typically be extremely time and/or resource intensive.

 

  • Stand Up Steps—putting the system in a “ready” state
    • This may seem trivial, but it includes restoring/restarting databases and VMs (over 50 machines), cleaning up hundreds of virtual test users, checking out and applying the latest code from Subversion, and creating new virtual users that sit and wait to take actions.
  • Run Steps—running all necessary tests
    • Here we would start performance counters, begin a script that triggered hundreds of virtual users to execute on-screen actions, and start our test tools (these consisted of tools that would record, and simulate a storm, outages, and customer calls).
  • Tear Down Steps—stopping everything and collecting data
    • After a predetermined amount of time for a test (some as long as 24 hours), we needed to end our “action” scripts, collect all data accumulated throughout the test (this mostly consisted of performance and functionality metrics), and cleanup any outstanding virtual users and queues.

 

We were able to develop a system where a single resource could execute a test that, in some 20 steps, would accomplish the actions and responsibilities of hundreds.  Moreover, the framework immediately provided test metrics, anything from system and database performance to functionality metrics.

 

A Wider Vision

 

The previous example, while focused more on performance testing than anything else, gave us a glimpse of what could truly be accomplished with the right mindset and technology.  Our clients have a magnifying glass on performance these days, and SE wants to be the resource utilized to help highlight issues and provide solutions.  The beginning of that is simply uncovering pain points in a more efficient manner—this means automation.

 

Many solutions exist for automation.  We feel that the proper approach is to maintain a control platform that can be fully customized to suit any testing need.  Frankly, we have little option in the matter.  Our clients’ needs are extremely broad and are oftentimes vastly different from one another based on size, geographic location, and the government/business rules that dictate their processes.  Remaining flexible in our approach allows us to provide utilities with a customized solution that fits perfectly into their system, taking into account all processes and integrations.

 

In a perfect world, we would gather all utilities, great and small, and, using input on their shared concerns, create a set of test scripts that could be universally applied and run by any client.  In this vein, we could implement the same automation across many clients.  With this approach, analysis of the gathered metrics would not only benefit client-side performance, but could also be brought back to our Product department for further R&D improvements.  While client Application Metrics (a separate feature) are provided in 10.2.1b for our Product department, they are not nearly as all-encompassing as system tests.

 

Okay, but why?

 

This all sounds well and good, but some of you may not face the scaled challenges of a large utility.  You might be reading this and thinking, “We don’t have testing or performance problems.”  While it is true that smaller utilities do not face the multitude of performance and functional testing hurdles that large utilities do, one must not discount the value of quick turnaround in any exchange.

 

Take a code change, for example.  Let’s say Small Utility Power (SUP) has new functionality they need added to their customizations.  SUP and SE strike up a contract to have SE develop this new functionality, which is then delivered to SUP.  Typically, regardless of size, a utility is not quick to rush new functionality into production.  Likely, they will install it in a lower environment, like a sandbox, dev, or QA, and then run a series of tests to determine whether the desired functionality not only works, but adheres to the intended design.

 

As this new functionality is promoted into production, there is the typical back-and-forth regarding bug fixes or additional tweaks to the software.  This stage can be extremely time-consuming, and it can often be very difficult to identify what the problems are, or more frighteningly, what problems may come.  It is precisely at this stage where automation is critical.  With the introduction of automation, SUP could receive new installers, drop them in a folder, and begin a test with a single click.  Immediately, feedback could be provided just by looking at pass/fail criteria.  Moreover, this same pass/fail test script could be used during development to make the project itself more efficient.  Could you imagine how much time you might save by running automatic functionality tests?

 

Our Internal Use

 

We don’t just believe in implementing automation because it is good for the utility industry; we implement automation in as many facets of our internal infrastructure as possible.  For example, during the debug phases of a project or bug fix, we often make code changes and need rapid testing.  We can test on the fly within our development environment, but that doesn’t always capture the client’s experience.  It is imperative that we deploy our new installers to a “pure” test machine (a VM that mirrors a client’s environment).

 

Normally, it would take a decent chunk of time to commit code changes, build installers, uninstall old installers from the VM, install new installers on the VM, open up programs, run the test, and analyze the results.  In fact, sometimes a single instance of this cycle could be a day or more.  With automation, we are able to commit a code change, and all the rest is done automatically—with the choice of running test scripts for further functionality testing.

 

A Call to Arms

 

Clearly, we can all get along without automation.  Sure, it’s nice, but is it imperative to your business?  For many, after demonstrating what automation can do for their ecosystem, the answer is yes.  If you could consolidate days to hours and hours to minutes, you’d be amazed at how many more cycles can be iterated through without further increasing resource cost.  Once this efficiency is injected into your business, it becomes impossible to live without.

 

Remember that “perfect world” I wrote of earlier in the post?  The cool thing is this can actually be achieved!  We are already working with some folks who are big on driving such changes, and together we truly do feel we can revolutionize how we all work together.  Though it is true that not all utilities experience the same problems, all utilities unite over commonalities in workflow.  I urge you all to invest R&D effort in automation.  Be pioneers!

 

Schneider Electric is dedicated to building, deploying, and assisting in maintaining automated frameworks to suit very specific needs.  Remember, we are your partner and are driven by your success.  Whether it’s consulting or developing, SE can walk with you and your business into the future.  Together, we can improve this industry and revolutionize your processes!

 

Please contact your Sales Account Manager for further information.

Welcome to our new GIS Client Services space in exchange! While this space provides info on who we are, what we do, how we do it, and who we’ve done it for, the primary goal of this space is to engage our Client Services experts with you, our clients and partners.


Our mission in Client Services is quite simple: we want to partner with every utility and telecommunications company in the world so that they can leverage the power of the ArcFM Solution. One of the most rewarding experiences for our Client Services team is to witness this first hand. Therefore, our passion is to repeat that experience over and over again.

 

We were founded on four core values over 30 years ago:

  • Deliver tangible business value with our products and services that makes our customers’ work lives better
  • Foster valuable relationships with our clients and partners
  • Create a work environment where our people grow
  • Have fun; we take our work very seriously, but we don’t take ourselves too seriously

We live these core values every day. Engraining these in our culture is what drives our behavior of servant leadership.


Our strategy that we execute to accomplish our mission is summarized in four areas:

  • Utilize our industry proven project delivery methodology that was designed over twenty years ago, and has been continuously improved ever since
  • Recruit, motivate and constantly educate our highly talented, expert team members
  • Strategically and tactically leverage our vast network of experienced partners
  • Execute all work with a commitment to five key dimensions of customer service: (1) empathy, (2) expertise, (3) responsiveness, (4) accountability, and (5) quality


We measure our success by yours. We focus on key metrics in:

  • Customer satisfaction
  • Customer return on investment
  • Repeat service business
  • Support contract renewals
  • Reference customers
  • Customer testimonials

 

To that end, we have designed and built GIS Client Services  with our key goals of increasing transparency and bringing you closer to our services, mission, core values, strategy, delivery methodology, team members, expertise, experiences, success, failures, and lessons learned. This is the place for interaction and engagement between you, our clients, partners, and our Client Services team members. We want that interaction to provoke creative thinking, debate, confirmation, disagreement, questions, ideas, solutions, and relationships. It goes without saying (but I will say it anyway), your participation is critical.  Quite simply, we can't do it without you.


So two simple calls to action:

  1. Engage with our experts via this GIS Services blog
  2. Ask questions regarding all things GIS Client Services in this space. 

 

Thank you for your participation in this space. And thank you for your continued business, partnership, and loyalty to our people, products, and company.