
Building a NetSuite Ecommerce Website with Keystone Data

Keystone Data produce high-performance Ecommerce websites that integrate with a NetSuite system to provide an Ecommerce platform without compromise. As an extension of Site Builder, Keystone Data can provide a complete, omnichannel experience in which online sales are accounted for directly in one unified system. We believe Ecommerce should be seamless, and it’s time to support your customers’ interactions with NetSuite’s Order Management, Sales, Accounting and CRM capabilities.

Our design process and development experience allow us to offer clients advanced, bespoke website front-ends that are designed to be fit for purpose from the get-go.

While our process is forever evolving, we utilise the following proven formula for supplying beautiful, high-performance websites:

Step 1 – The Creative Brief

The Creative Brief can be drawn up by our experienced designer at Keystone Data, or supplied by the client in consultation with their own designer. This stage is about getting to know your company, its personality and how you’d like to convey that through the website. We also like to explore elements of other websites that you may wish to incorporate, as well as any must-have features.

The Creative Brief may include:

  • The logos and branding concepts for the project.
  • Examples of websites that illustrate features to include in the website.
  • Examples of design elements or websites with a look or feel the client wishes to incorporate into the website.
  • Notes relating to the target market/demographic for the website.

Step 2 – Website Design and Specification

At Keystone Data we like to make sure we are making the best possible website for our client, and a big part of that is working cohesively with the client to produce extensive designs and detailed specifications. We believe that by drawing up great plans, the customer can start to visualise what can be achieved and explore the desired functionality proactively, rather than in hindsight.

Design

The Website Design stage utilises the design parameters established in the Creative Brief to create a rich mock-up of each key page within the website.

The outcome of this phase is the creation of a Photoshop Design for each key page within the website. Images are often the easiest way to convey ideas, and the client can evaluate the website’s potential appearance at an early stage.

The Designs will form a key part of the documentation, which can be referred to as a specification of website features, as well as the look and feel.

During this stage Keystone Data present ideas to the client, collaborate on any potential changes and ensure the designs are signed off ready for development.

Specification

At Keystone Data, we ensure the Specification is thorough and accessible for the client to make sure the build is developed to exacting standards. In the specification stage we annotate the website designs to explore how the website will function.

The specification will be provided as a rich document detailing how the underlying NetSuite products, categories and website content will be displayed on the website and edited in NetSuite.

Step 3 – Website Development

The Website Development stage involves Keystone Data using modern technologies to build the website itself. Using our own bespoke Ecommerce platform, we are able to ensure that the website is built with NetSuite in mind, offering a platform that far exceeds what is possible when integrating with an external Ecommerce system.

Step 4 – Initial Website Testing

During this phase Keystone Data will set up example products within NetSuite’s Inventory, along with website content, so that the website closely matches the initial designs. This allows us to mimic the functionality defined in the specification and test the processes in preparation for our clients to begin managing the backend.

At this stage we like to present the functionality of the website to our client and ensure the build is surpassing expectations.

Step 5 – Website Refinement and Population of Data

Within the Website Refinement stage, the client will begin to populate the website with their own data and complete the website testing with guidance from Keystone Data. We have produced extensive guides and documentation that add to NetSuite’s own support, ensuring the system is accessible and manageable.

With an open line of communication, our clients can give us feedback on any issues or discuss any future developments.

Step 6 – Website Go Live Pre-Testing

Going live with a web store can be a daunting undertaking. At Keystone Data, we test the build at every stage to promote reliability, and we prepare the client for Go Live with a Test Plan that covers all eventualities.

Testing isn’t limited to the front end, however; the full order system is tested in NetSuite too.

Keystone Data work to provide the best possible solutions for all our clients, and so while this process gives an insight into what you could expect when building a website with us, our exhaustive approach to NetSuite means that may just be the start.

The process above focuses on the front-end implementation of an Ecommerce website with an existing NetSuite system; however, at Keystone Data we also specialise in NetSuite implementations delivered alongside an Ecommerce platform. For more information, please don’t hesitate to contact Keystone Data today.


SuiteScript 2.0 with 2015.2

Version 2015 Release 2 introduces the beta version of SuiteScript 2.0, a complete refactor of the SuiteScript model. The introduction of SuiteScript 2.0 does not change how SuiteScript 1.0 scripts are written. Only one SuiteScript 2.0 enhancement impacts SuiteScript 1.0 – the enhancement to the Script record creation process.

Advantages of SuiteScript 2.0

  • Modular Architecture
  • Modern JavaScript Syntax and Behaviour
  • New and Improved API Functionality
  • Asynchronous Client Side Processing (Promises)
  • New Batch Processing Framework (Map/Reduce Script Type)

Modular Architecture

SuiteScript 2.0 is built on modularity. With it, you have access to a complete set of new APIs, contained within modules. These modules are organized based on behaviour. SuiteScript 2.0 also enables you to create your own custom modules. You can use these custom modules to organize helper functions (as a replacement for SuiteScript 1.0 libraries). Additionally, you can add custom modules to SuiteApps and expose those modules to third parties.
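
As a sketch of what a custom module might look like (the file name and helper function here are hypothetical), a SuiteScript 2.0 custom module is simply a file that calls define() and returns the members it wants to expose:

    // myHelpers.js – a hypothetical custom module replacing a SuiteScript 1.0 library file
    /**
     * @NApiVersion 2.x
     * @NModuleScope Public
     */
    define([], function() {

        // Format a customer name consistently across scripts
        function formatName(firstName, lastName) {
            return lastName + ', ' + firstName;
        }

        // Only the members returned here are visible to other modules
        return {
            formatName: formatName
        };
    });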

Dependency Management and Improved Performance

SuiteScript 2.0 gives you built-in dependency management. It also gives you improved performance. With SuiteScript 2.0, you define the dependencies that must load prior to module execution. This means that you are required to load only those modules that are needed. Because you are not loading all available dependencies at one time (as you do in SuiteScript 1.0), your script loads faster. Plus, when possible, required dependencies are loaded asynchronously in client-side scripts.
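
For illustration, an entry point script declares only the modules it actually needs – here the standard N/record module plus the hypothetical custom module sketched above – and those dependencies are resolved before the callback runs:

    /**
     * @NApiVersion 2.x
     * @NScriptType UserEventScript
     */
    define(['N/record', './myHelpers'], function(record, helpers) {

        // Only N/record and the custom helper module are loaded for this script
        function beforeSubmit(context) {
            var customerName = helpers.formatName(
                context.newRecord.getValue({ fieldId: 'firstname' }),
                context.newRecord.getValue({ fieldId: 'lastname' })
            );
            log.debug('Customer', customerName);
        }

        return { beforeSubmit: beforeSubmit };
    });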

Modern JavaScript Syntax and Behaviour

The underlying design principle of this new version of SuiteScript is that SuiteScript 2.0 is JavaScript. SuiteScript 2.0 scripts are clean and intuitive.

SuiteScript 2.0 is modelled to look and behave like modern JavaScript. To facilitate that objective, SuiteScript 2.0 methods and objects are not prefixed with nlapi and nlobj.

This change also reflects the modular organization of SuiteScript 2.0. SuiteScript 1.0 methods and objects respectively belong to the nlapi and nlobj namespaces. SuiteScript 2.0 methods and objects are encapsulated within various modules.
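
As a simple comparison, loading a sales order illustrates the shift from the global nlapi functions to module-scoped methods (the internal id 123 is just an example value):

    // SuiteScript 1.0 – global function in the nlapi namespace
    var rec = nlapiLoadRecord('salesorder', 123);

    // SuiteScript 2.0 – the same operation via the N/record module
    define(['N/record'], function(record) {
        var rec = record.load({
            type: record.Type.SALES_ORDER,
            id: 123
        });
    });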

Properties and Enumerations

SuiteScript 2.0 adopts the usage of properties and enumerations. Most SuiteScript 1.0 getter and setter methods are replaced with properties. Enumerations encapsulate common constants (for example, standard record types).
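
For example (a sketch using the N/record module), values that were retrieved through getter methods in 1.0 are exposed as properties in 2.0, and the record type is referenced through an enumeration rather than a string literal:

    // SuiteScript 1.0 – getter methods and string constants
    var rec10 = nlapiLoadRecord('salesorder', 123);
    var id10 = rec10.getId();
    var type10 = rec10.getRecordType();

    // SuiteScript 2.0 – properties and the record.Type enumeration
    define(['N/record'], function(record) {
        var rec = record.load({
            type: record.Type.SALES_ORDER,   // enumeration instead of 'salesorder'
            id: 123
        });
        var id20 = rec.id;                   // property instead of getId()
        var type20 = rec.type;               // property instead of getRecordType()
    });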

Updated Sublist Indexing

The standard practice in the development world is to start indexing at 0. This behaviour is observed in the majority of programming languages. To bring SuiteScript into alignment with modern JavaScript, sublist indexing within SuiteScript 2.0 begins at 0.
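
So, as a quick sketch, reading the first line of a sales order’s item sublist uses line 0 in SuiteScript 2.0, where the equivalent 1.0 call would use line 1:

    // SuiteScript 1.0 – sublist lines are numbered from 1
    var qty10 = nlapiGetLineItemValue('item', 'quantity', 1);

    // SuiteScript 2.0 – sublist lines are indexed from 0
    // (rec is a record loaded via record.load, as in the earlier example)
    var qty20 = rec.getSublistValue({
        sublistId: 'item',
        fieldId: 'quantity',
        line: 0
    });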

Consistent Behaviour

  • All SuiteScript 2.0 methods take a plain JavaScript object as an input. Within the SuiteScript 2.0 API, all method inputs are named options. For example, the record.load signature is listed as record.load(options).
  • All SuiteScript 2.0 Booleans take a value of true or false. All other Boolean values (for example: T or F) throw an error.
  • Parameter types in SuiteScript 2.0 are strictly adhered to. You must pass in valid parameter types, as listed in the SuiteScript 2.0 help. SuiteScript 2.0 does not convert invalid parameter values to valid values.
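
The points above come together in a single call; in this sketch the options object carries every parameter, and isDynamic must be a genuine JavaScript Boolean:

    define(['N/record'], function(record) {
        var rec = record.load({              // every parameter travels in the options object
            type: record.Type.SALES_ORDER,
            id: 123,
            isDynamic: true                  // must be true/false – passing 'T' throws an error
        });
    });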

New and Improved API Functionality

SuiteScript 2.0 includes the following new functionality.

Expanded Support for HTTP Content Type Headers

SuiteScript 2.0 adds support for most HTTP content types.
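
As an illustrative sketch (the endpoint URL is hypothetical), the N/https module allows the content type to be set explicitly on the request headers:

    define(['N/https'], function(https) {
        var response = https.post({
            url: 'https://example.com/api/orders',          // hypothetical endpoint
            body: JSON.stringify({ orderId: 123 }),
            headers: { 'Content-Type': 'application/json' }
        });
        log.debug('Response code', response.code);
    });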

New Encryption/Encoding Functionality

SuiteScript 2.0 adds enhanced encryption, decryption, encoding, and hashing functionality.
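
As one small example of the encoding side, the N/encode module converts a string between character encodings; this sketch converts UTF-8 text to Base64:

    define(['N/encode'], function(encode) {
        var base64 = encode.convert({
            string: 'Hello, NetSuite',
            inputEncoding: encode.Encoding.UTF_8,
            outputEncoding: encode.Encoding.BASE_64
        });
        log.debug('Encoded value', base64);
    });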

Asynchronous Client Side Processing (Promises)

With SuiteScript, asynchronous processing is especially important in client remote object scripts. These are client scripts that make a call to the NetSuite server to create, load, edit, submit, or delete an object. You can use nested callback functions to increase performance and efficiency, but this method becomes confusing when dealing with complex operations. The end result is often code that is difficult to read.

In SuiteScript 2.0, all client scripts now support the use of promises. With promises, developers can write asynchronous code that is intuitive and efficient. SuiteScript 2.0 provides promise APIs for select modules (see SuiteScript 2.0 Promise APIs). In addition, you can create custom promises in all modules that support client scripting.

A promise is a JavaScript object that represents the eventual result of an asynchronous process. After this object is created, it serves as a placeholder for the future success or failure of the operation. During the period of time that the promise object is waiting, the remaining segments of the script can execute.

A promise holds one of the following values:

  • fulfilled – The operation is successful.
  • rejected – The operation failed.
  • pending – The operation is still in progress and has not yet been fulfilled or rejected.

When it is first created, a promise holds the value pending. After the associated process is complete (from success or failure), the value changes to fulfilled or rejected. A success or failure callback function attached to the promise is called when the process is complete. Note that a promise can only succeed or fail one time. When the value of the promise updates to fulfilled or rejected, it cannot change.
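
Because SuiteScript 2.0 promises behave like standard JavaScript promises, a custom promise in a client script follows the familiar pattern; this sketch shows a promise moving from pending to either fulfilled or rejected (calculateOrderTotal is a hypothetical helper):

    // A custom promise starts out pending...
    var totalReady = new Promise(function(resolve, reject) {
        try {
            resolve(calculateOrderTotal());   // hypothetical helper – fulfils the promise
        } catch (e) {
            reject(e);                        // rejects the promise on failure
        }
    });

    // ...and the attached callbacks run once it is fulfilled or rejected
    totalReady.then(function(total) {
        log.debug('Order total', total);
    }).catch(function(reason) {
        log.error('Could not calculate total', reason);
    });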

SuiteScript 2.0 Promise APIs

To make promises more accessible, SuiteScript 2.0 provides client-side promise APIs for the following modules:

  • email
  • error
  • http
  • https
  • record
  • search
  • transaction

The available promise APIs are named so that they correspond with their synchronous counterparts. The distinction is that the promise APIs have names that are suffixed with .promise. For example, the search.create(options) API has a promise version named search.create.promise(options).
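
For example, a client script entry point might run a customer search without blocking the page; this is only a sketch, and the filter values are hypothetical:

    /**
     * @NApiVersion 2.x
     * @NScriptType ClientScript
     */
    define(['N/search'], function(search) {

        function pageInit(context) {
            search.create.promise({
                type: search.Type.CUSTOMER,
                filters: [['email', 'contains', '@example.com']]   // hypothetical filter
            }).then(function(customerSearch) {
                return customerSearch.run().getRange.promise({ start: 0, end: 10 });
            }).then(function(results) {
                log.debug('Matching customers', results.length);
            }).catch(function(reason) {
                log.error('Search failed', reason);
            });
        }

        return { pageInit: pageInit };
    });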

New Batch Processing Framework (Map/Reduce Script Type)

Map/reduce is a programming model that enables large amounts of data to be efficiently batch processed. The initial task of parsing the data is broken up into several categories of smaller tasks. In general (not specific to SuiteScript), the basic flow of map/reduce is as follows:

  • Take in a large data set (INPUT).
  • Parse the data into key: value pairs (MAP).
  • Group values based on keys.
  • Evaluate the data in each group (REDUCE).
  • Output the results (OUTPUT).
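
The flow is easiest to see in a small, NetSuite-agnostic sketch in plain JavaScript – here a list of line amounts is grouped and totalled per customer:

    // INPUT – the original data set
    var input = [
        { customer: 'A', amount: 10 },
        { customer: 'B', amount: 5 },
        { customer: 'A', amount: 7 }
    ];

    // MAP – parse the data into key: value pairs
    var pairs = input.map(function(line) {
        return { key: line.customer, value: line.amount };
    });

    // SHUFFLE – group values based on keys
    var groups = {};
    pairs.forEach(function(pair) {
        (groups[pair.key] = groups[pair.key] || []).push(pair.value);
    });

    // REDUCE – evaluate the data in each group
    var output = {};
    Object.keys(groups).forEach(function(key) {
        output[key] = groups[key].reduce(function(sum, value) { return sum + value; }, 0);
    });

    // OUTPUT – { A: 17, B: 5 }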

SuiteScript 2.0 introduces a new script type based on the map/reduce model. Map/reduce scripts provide a structured framework for batch processing scripts. In addition to batch processing, SuiteCloud Plus users can also use map/reduce scripts to process records in parallel across multiple work queues. This script type replaces the parallel processing functionality utilized in SuiteScript 1.0 scheduled scripts. Users manually select the number of work queues to utilize from the script deployment record.

Note: The map/reduce script type automatically tracks governance usage and yielding.

Map/Reduce is a new server-side script type that batch processes large data sets. It goes through at least three of five possible stages when a script is executed.

Important: A map/reduce script is not required to go through both the Map and Reduce stages. One of these stages can be skipped, but the script must go through one of them.

The stages are processed in the following order.

  • Get Input Data – Takes in the original data set. This stage is always processed first and is required. When the data is processed across multiple queues, this stage runs sequentially.
  • Map – Parses data into key: value pairs. If this stage is skipped, the Reduce stage is required. When the data is processed across multiple queues, this stage runs in parallel.
  • Shuffle – Groups values based on keys. This stage is automatically processed after the Map stage is processed. When the data is processed across multiple queues, this stage runs sequentially.
  • Reduce – Evaluates the data in each group. If this stage is skipped, the Map stage is required. When the data is processed across multiple queues, this stage runs in parallel.
  • Summarize – Summarizes the metadata of the task. This stage is optional and is not technically a part of the map/reduce process. When the data is processed across multiple queues, this stage runs sequentially.

With the map/reduce script type, SuiteScript 2.0 also introduces a new map/reduce API. The map/reduce API includes four entry point functions that control the script’s flow into the stages listed above.

  • getInputData() – Starts the map/reduce process. Takes the script into the Get Input Data stage and returns an array. This entry point function is required.
  • map() – Takes the script into the Map and Shuffle stages and returns key: value pairs. Note that the Shuffle stage is automatically processed after the Map stage is processed. If this entry point function is skipped, reduce() is required.
  • reduce() – Takes the script into the Reduce stage and then outputs the result. If this entry point function is skipped, map() is required.
  • summarize() – Takes the script into the Summarize stage. This entry point function is optional.
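
Putting the entry points together, a minimal map/reduce script might total invoice amounts per customer. This is only a sketch under assumed search columns and result structure, not a production script:

    /**
     * @NApiVersion 2.x
     * @NScriptType MapReduceScript
     */
    define(['N/search'], function(search) {

        // Get Input Data – return a search; each result is fed to map()
        function getInputData() {
            return search.create({
                type: search.Type.INVOICE,
                filters: [['mainline', 'is', 'T']],
                columns: ['entity', 'amount']
            });
        }

        // Map – emit one key: value pair per invoice, keyed by customer
        function map(context) {
            var result = JSON.parse(context.value);
            context.write({
                key: result.values.entity.value,
                value: result.values.amount
            });
        }

        // Reduce – total the invoice amounts grouped under each customer key
        function reduce(context) {
            var total = context.values.reduce(function(sum, value) {
                return sum + parseFloat(value);
            }, 0);
            context.write({ key: context.key, value: total });
        }

        // Summarize – log basic metadata about the completed task
        function summarize(summary) {
            log.audit('Usage consumed', summary.usage);
            log.audit('Queues used', summary.concurrency);
        }

        return {
            getInputData: getInputData,
            map: map,
            reduce: reduce,
            summarize: summarize
        };
    });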

The script record for map/reduce scripts is similar to the script record for scheduled scripts.

Deployment is also handled in the same manner. Both script types have three options for deployment: by schedule, from the Save and Execute option on the deployment record, or through the task module.
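
For the task module route, a sketch using N/task (the script and deployment ids here are hypothetical) would look like this:

    define(['N/task'], function(task) {
        var mrTask = task.create({
            taskType: task.TaskType.MAP_REDUCE,
            scriptId: 'customscript_invoice_totals',      // hypothetical script id
            deploymentId: 'customdeploy_invoice_totals'   // hypothetical deployment id
        });

        var taskId = mrTask.submit();

        // Optionally poll the framework for the task's current status
        var status = task.checkStatus({ taskId: taskId });
        log.debug('Map/reduce status', status.status);
    });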


When it comes to systems integration – keep it simple

Sometimes when clients ask us to integrate NetSuite with another package, they ask for a tight integration because they believe this will make the resulting integrated solution easier for their users.

For our technical staff it is sometimes tempting to undertake such an integration, as it may be more interesting to develop. But from my experience, I would always suggest that a client keep any integration as simple as possible, for a number of reasons.

Firstly, the more complex an integration, the more difficult it is to define and design a solution that delivers both what users believe they want and what is realistically achievable.

Secondly, it follows that a complex integration is harder to develop and test, and therefore harder to make reliable. Devising the sort of testing the users need to carry out is tricky, so problems are likely to emerge during live running.

Thirdly, a more complex integration is more difficult to maintain, and any problems in live running may prove hard to analyse and even harder to correct and re-test.

Fourthly, a tightly integrated solution (for example, where one package is adjusted to use the other package’s product file) may well have problems when either package is updated. At the very least, whenever a new version is introduced, the integrating solution provider will need to consider the potential for issues.

Fifthly, the thorny issue of determining the cause of a problem is exacerbated by tight integration. I always think it is a good idea for a client to consider who will be responsible for determining which element in the system is causing a problem, and to have a support contract in place with that party which specifies this role. It is also a good idea to have all other supporting parties agree to respect the ‘problem identifier’ (with all necessary caveats). Many clients do not appreciate that this role is required and can be time-consuming. They may be reluctant to pay for such a service, but without it the client (probably without the necessary technical knowledge or skill) often has to spend time negotiating between suppliers to prove where a problem lies and establish responsibility.

In summary, I suggest that for most integrations, the apparent time saved in user tasks by a tight integration is likely to be lost in the time spent specifying, testing and accepting the more complex solution, plus the time lost to greater system downtime and to discussions with suppliers about problems experienced during live running or when a new version of the software is released.


Systems Integration – Things to consider

For most organisations there is no one software application that meets all their needs. Even the most sophisticated and comprehensive ERP solution is likely to miss out on required functionality, and often the best approach is to select an additional specialist or best-of-breed software application to be integrated with the ERP.

Having made such a decision, the question arises – what sort of integration is required? The tightness of any integration can be defined at one of three broad levels:

• Level 1 – data is transferred between systems at given times
• Level 2 – two or more databases are synchronised regularly
• Level 3 – two or more applications share a common database

For some solutions a combination of levels is appropriate. For example, if integrating an eCommerce application with a generalised sales order processing and fulfilment package, one might use level 3 integration to share the stock and/or customer database, but level 1 integration for passing order details between the systems.

It might seem that level 3 is always best, but this is not the case and there are arguments for and against each degree of tightness. The actual choice made in any circumstance will be driven by a number of factors:

• What the organisation needs or wants to achieve;
• How important it is for the integration to maintain a real-time view of data in both systems;
• How quickly the organisation needs/wants the solution;
• How open the application packages are in terms of publishing details, maintaining consistent published interfaces and advising developers and customers about significant technical changes made to their product;
• What is technically possible given the application packages to be integrated (often level 3 integration is not possible unless a solution is being developed – including an eCommerce solution);
• The budget or cost/benefit case.

One important issue that arises from this list is the need to make ‘openness’ a criterion when selecting an ERP solution; the same applies to the selection of any application package. Even if at the outset there is no requirement for integration, it is quite likely that during the lifetime of such an application (five to ten years) some kind of integration will become necessary or desirable.