The Northwest Territories Power Corporation (NTPC) will be updating its server infrastructure in support of its cybersecurity and evergreening program goals. To that end, NTPC is seeking the following:
Large Servers:
NTPC is looking to procure a minimum of 5 physical servers to the following specifications. The intended use case is to run VMware vSphere as compute only, but possibly also to support a hyperconverged, software-defined virtual SAN (which will not be VMware vSAN):
• All hardware without exception should be out-of-the-box on the VMware HCL
o Minimum for vSphere
o Ideally for vSAN ESA – note: NTPC will not be deploying VMware vSAN so this is not a requirement
• 2U Rackmount
o Must include Rack Rails (standard 4 post) and cable management arm
• 2 socket Intel platform – 8 cores/socket
o Preference towards high frequency processors
o UEFI System
• Discrete TPM 2.0 or better
• Dedicated OOB Remote Management
o Capable of full “lights out” remote management: OS installation, hardware configuration, remote KVM, etc.
o Must support 2FA (does not need to be pre-configured)
• Dual Power Supplies (hot swappable)
• Boot hard disks – dedicated RAID 1 – at least 400GB SSD (SAS/SATA/NVMe)
• NVMe Chassis
o 24 disk slots
o No disks required – but OK if you need to sell at least 1 with the system
-If at least 1 disk is included, it must be rated for at least 1 DWPD (see the endurance sketch following this list)
o RAID Card capable of RAID 60, but also convertible to IT Mode via firmware
• Network Configuration
o The dedicated OOB 1GigE port as noted above (not an additional port)
o Minimums
-2 x 1GigE RJ45 for vSphere management (OK for 10G SFP+)
-4 x 100GigE QSFP (connecting to existing Cisco 9500 series switches)
• Do not include optics or DACs
• Expansion
o At least 1 full height PCIe Gen 4 Slot
o At least 1 full height PCIe Gen 5 Slot
• No OS
• 5 years NBD Support
o NTPC understands that no vendor can deliver NBD support; instead, a demonstrable commitment to direct escalation to L3 technical support to expedite replacement parts is desired.
• 1TB ECC RAM – expandable to 2TB
• Price per server should be quoted based upon a minimum of 5 servers being purchased “now”
o The vendor must also include an individual per-server price at which NTPC could expand the order, in increments of a single server, to no more than 13 servers total.
o Price must be FOB Hay River, NT
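For illustration of the 1 DWPD requirement above, the following Python sketch shows one way a quoted drive's rated endurance (TBW) could be checked against 1 full drive write per day over the 5-year support term. The capacity and TBW figures are placeholder assumptions only, not a reference to any specific drive.

```python
# Illustrative sketch only: checking a quoted SSD against the "at least
# 1 DWPD" requirement over the 5-year support term. The capacity and TBW
# values are placeholders, not a specific drive's ratings.

def min_tbw_for_dwpd(capacity_tb: float, dwpd: float, years: float) -> float:
    """Terabytes written needed to sustain `dwpd` full-drive writes per day."""
    return capacity_tb * dwpd * 365 * years

def meets_endurance(rated_tbw: float, capacity_tb: float,
                    dwpd: float = 1.0, years: float = 5.0) -> bool:
    """True if the drive's rated TBW covers the DWPD target over the term."""
    return rated_tbw >= min_tbw_for_dwpd(capacity_tb, dwpd, years)

if __name__ == "__main__":
    capacity_tb = 3.84   # placeholder raw capacity, TB
    rated_tbw = 7100.0   # placeholder vendor endurance rating, TBW
    required = min_tbw_for_dwpd(capacity_tb, 1.0, 5.0)
    print(f"Required TBW for 1 DWPD over 5 years: {required:.0f} TB")
    print("Meets 1 DWPD requirement:", meets_endurance(rated_tbw, capacity_tb))
```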
Small Servers:
NTPC is looking to procure a minimum of 2 servers that are physically smaller and lower-spec than those listed above, to the following specifications. The intended use case is to run VMware vSphere with compute and local storage only. Note that NTPC, at its sole discretion and depending on pricing, may increase the required quantity to 5 of these servers.
• All hardware without exception should be out-of-the-box on the VMware HCL
o Minimum for vSphere
• 1U Rackmount
o These should be ½-depth or ⅓-depth servers
o Must include at least standard 19” rack “ears”
-These servers may be racked hanging vertically, so ears are required
-These servers may be racked in a wall-mount rack
• Target either a 17.7”-deep rack or a 14.5”-deep rack
-It is also possible these servers may be racked in a standard 2-post telco-style rack
• Socket Intel platform – 8 cores/socket
o UEFI System
• Discrete TPM 2.0 or better
• Dedicated OOB Remote Management
o Capable of full “lights out” remote management: OS installation, hardware configuration, remote KVM, etc.
o OOB management must support 2FA (does not need to be pre-configured)
• Dual Power Supplies – hot swappable
• Boot hard disks – dedicated RAID 1 – at least 400GB SSD (SAS/SATA/NVMe)
• Local Storage Capacity
o At least 2 SATA/SAS/NVMe drive bays (does not need to be hot swap)
-Ideally connected to a RAID controller capable of RAID 1, 10, and 5 (see the usable-capacity sketch following this list)
o SSDs should be ~8TB raw capacity
-Quote price per drive
o Indicate whether the system requires server-branded SSDs, or if commodity enterprise SSDs can be utilized
• Network Configuration
o The dedicated OOB 1GigE port as noted above (not an additional port)
o Minimums:
-2 x 1GigE RJ45 for vSphere management (OK for 10G SFP+)
-2 x 10GigE SFP+
• Do not include optics or DACs
• Expansion
o At least 1 half-height PCIe Gen 4 Slot
• No OS
• 5 years NBD Support
o NTPC understands that no vendor can deliver NBD support; instead, a demonstrable commitment to direct escalation to L3 technical support to expedite replacement parts is desired.
• 512GB ECC RAM
o Indicate what it can be expanded to - if at all
• Price-per-server should be quoted based upon a minimum of 2 servers being purchased “now”
o The vendor must also include an individual per-server price at which NTPC could expand the order, in increments of a single server, to no more than 8 servers total.
o Price must be FOB Hay River, NT
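For illustration of the local storage requirement above, the following sketch estimates usable capacity under RAID 1, 10, and 5 for a given number of ~8TB SSDs. The drive counts used are placeholder assumptions; vendors should substitute the configuration they actually propose.

```python
# Illustrative sketch only: rough usable capacity of the small-server local
# storage under RAID 1, 10, and 5. Drive counts are placeholders and the
# figures ignore formatting and metadata overhead.

def usable_tb(raid_level: str, drives: int, drive_tb: float) -> float:
    """Approximate usable capacity for a simple single-group array."""
    if raid_level == "RAID1":
        if drives != 2:
            raise ValueError("RAID 1 modelled here as a 2-drive mirror")
        return drive_tb
    if raid_level == "RAID10":
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even number of drives, >= 4")
        return drives / 2 * drive_tb
    if raid_level == "RAID5":
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_tb
    raise ValueError(f"unsupported RAID level: {raid_level}")

if __name__ == "__main__":
    drive_tb = 8.0  # ~8TB raw per SSD, per the specification above
    for level, count in (("RAID1", 2), ("RAID5", 3), ("RAID10", 4)):
        print(f"{level}, {count} x {drive_tb:.0f}TB: "
              f"~{usable_tb(level, count, drive_tb):.0f}TB usable")
```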
Storage:
NTPC is seeking to replace 3 EOL SANs (Compellent SC4050s). The primary use case is block storage providing iSCSI targets only. NTPC has no Fibre Channel infrastructure and is only interested in an IP-based solution that works natively with VMware. Additional features such as volume replication, storage-based snapshots, vVols, compression, deduplication, etc., are neither required nor desired; the only use case is VMware VMFS volumes. All replication, snapshots, backups, and immutable storage are provided natively elsewhere within NTPC by VMware or Veeam. Any proposed solution must be natively supported by VMware and Veeam.
NTPC has 3 VMware clusters: one 5-host cluster and two 3-host clusters. Each cluster will have 1 of these new storage platforms dedicated to it. If relevant to the quotation, the 5-host cluster is located in a data center in Hay River, and the two 3-host clusters are located at the Jackfish Generation site in Yellowknife, in two physically separate data centers on the premises. Each proposed storage platform must meet the following specifications:
• All Flash – no storage tiering
• Expose volumes as iSCSI targets
• Each storage platform must be able to deliver a minimum of 100TiB usable in support of its cluster
o This can either be a monolithic 100TiB volume, or
o In the case of a hyperconverged solution running as virtual machines with vSphere-exposed disks (either pass-through RAID volumes or pass-through disks), there can be a maximum of 1 volume exposed per host in the cluster. For example (see also the capacity sketch following this list):
-For a 3-host cluster, a maximum of 3 LUNs exposed to all 3 hosts, comprising at least 33TiB/LUN
-For a 5-host cluster, a maximum of 5 LUNs exposed to all 5 hosts, comprising at least 20TiB/LUN
• For traditional Physical SAN, a minimum of 100G multipath bandwidth per controller
o Must have at least 2 controllers
o Each controller must have a minimum total of 100G available
-Can be a single 100G/controller
-Can be 4 x 25G/controller
• For hyperconverged vSAN, connectivity must natively run/sync over a maximum of 2 x 100G ports
• The storage proposal must provide a minimum of 100TiB raw available/usable
o Note that proposals indicating “up to” 100TiB using technologies like deduplication, compression, etc., will not be considered a complete proposal
o If the proposal includes deduplication/compression, etc., those reference numbers should be based upon what may be achieved with a minimum of 100TiB raw usable
• Storage proposal must include details about expansion past 100TiB usable
o Are there any licensing requirements to unlock?
o Can disks simply be added and expanded, or are entire shelves of disks required to be provisioned?
o Do disks need to be a particular hardware brand to be functional?
• If proposing a hyperconverged vSAN, keep in mind that this would be deployed onto a 2U server with an entirely NVMe chassis, a RAID 60 capable controller, and 24 disk slots available:
o The proposed number of disks and their capacities required to reach 100TiB usable (you must include the exact make/model of the proposed drive)
o If the proposal prefers to utilize exposed local RAID LUNs, propose the disk layout, stripe size recommendations, etc.
o If the proposal directly exposes NVMe disks via an IT Mode controller, specify how the pass-through disks should be provisioned to the VM, redundancy/rebuild overhead requirements, etc.
o Detail how data locality works with a 3-host and a 5-host cluster, the recommended number of volume replicas, and how maintenance procedures occur (both planned and unplanned)
o Details on how multipath/discovery is configured across all LUNs exposed to the vSphere hosts
• Details on whether “at rest” encryption is available, and whether there are any performance impacts from enabling it
• Provisioned iSCSI LUNs must have the ability to have individual CHAP credentials set per iSCSI target/initiator pair.
• Details on expected performance
o Sustained read/write throughput in MB/sec at 1M and 4k block sizes, both sequential and random (see the throughput/IOPS sketch following this list)
o Sustained best-case IOPS with the proposed solution based upon 100G connectivity
o If an HCI solution is proposed, the expected worst-case RAM and CPU overhead
• 5 years NBD Support
o NTPC understands that no vendor can deliver NBD support; instead, a demonstrable commitment to direct escalation to L3 technical support to expedite replacement parts, or 1-hour escalation to technical support, is desired.
• Everything must be quoted as FOB Hay River, NT
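For illustration of the capacity requirements above (the per-cluster LUN example and the hyperconverged disk-count question), the following sketch computes the even per-LUN split needed to reach 100TiB for the 3-host and 5-host clusters, plus a rough RAID 60 usable-capacity estimate for a 24-slot NVMe chassis. The drive size and span layout used are placeholder assumptions, not a recommendation.

```python
# Illustrative sketch only: capacity arithmetic behind the 100TiB usable
# requirement. Drive size and RAID 60 span layout are placeholder assumptions.

def min_tib_per_lun(total_tib: float, hosts: int) -> float:
    """Even split of the usable target across one LUN per host."""
    return total_tib / hosts

def raid60_usable_tib(drives: int, drive_tib: float, spans: int) -> float:
    """RAID 60 is striped RAID 6 spans; each span loses 2 drives to parity."""
    if spans < 2 or drives % spans or drives // spans < 4:
        raise ValueError("need >= 2 equal spans of at least 4 drives each")
    return (drives - 2 * spans) * drive_tib

if __name__ == "__main__":
    for hosts in (3, 5):
        print(f"{hosts}-host cluster: >= {min_tib_per_lun(100, hosts):.1f} "
              f"TiB per LUN across {hosts} LUNs to reach 100TiB")

    # Placeholder layout: 24 x 7.0TiB NVMe drives in two 12-drive RAID 6 spans.
    drives, drive_tib, spans = 24, 7.0, 2
    print(f"RAID 60, {drives} x {drive_tib}TiB in {spans} spans: "
          f"~{raid60_usable_tib(drives, drive_tib, spans):.0f} TiB usable")
```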
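Similarly, for the performance details requested above, the following sketch relates MB/sec throughput to IOPS at the 4k and 1M block sizes and shows the theoretical ceiling implied by 100G connectivity. These are raw line-rate upper bounds only, ignoring protocol overhead, and are not expected real-world results.

```python
# Illustrative sketch only: relating MB/sec throughput to IOPS at the
# requested block sizes, plus the raw ceiling implied by 100Gb/s connectivity
# (no protocol overhead accounted for).

def iops_from_throughput(mb_per_sec: float, block_kb: float) -> float:
    """IOPS implied by a given throughput at a fixed block size."""
    return mb_per_sec * 1000 / block_kb

def line_rate_mb_per_sec(gbits: float) -> float:
    """Raw line rate in MB/sec for a given link speed in Gb/s."""
    return gbits * 1000 / 8

if __name__ == "__main__":
    ceiling = line_rate_mb_per_sec(100)  # single 100G path, upper bound only
    print(f"100G raw line rate: ~{ceiling:.0f} MB/sec")
    for label, block_kb in (("4k", 4), ("1M", 1024)):
        print(f"  at {label} blocks: up to "
              f"~{iops_from_throughput(ceiling, block_kb):,.0f} IOPS")
```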
Switching:
NTPC has recently completed an upgrade of its ToR switching, standardizing on a pair of Cisco C9500-32QC-E switches in each of two locations. NTPC wishes to add another pair of these same switches in a third location. In addition, NTPC wishes to purchase the appropriate licensing to unlock/enable VxLAN on all 6 C9500s it will own after this procurement has completed. Please quote a pair of Cisco C9500-32QC-E switches, plus licensing that would enable VxLAN on 6 of these switches.