Hi,
We have developed an online web stats system. It currently handles about 5
million transactions per month. A customer asked whether we could support as
many as 450 million transactions a month.
Obviously, a single server with a 1.5 GHz dual-core processor and 2 GB of RAM
will collapse under this workload. But what could support that kind of
workload?
Can SQL Server basically support that? If so, what kind of hardware strategy
would be best? Replication? One big fat server with 16 CPUs and a few
terabytes of RAM?
I'm looking for information, thoughts, hints, or articles to read about how
to set up SQL Server to support a huge workload.
Any ideas?
Thanks
Stephane|||1) Yes, SQL Server can support 450M transactions per month. See www.tpc.org and
their TPC-C benchmark.
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp will show that SQL
Server (number 7 on the list) was able to scale to 1.2M transactions PER
MINUTE. Note that this was a $6M setup. However, 450M per month is only about
10,500 transactions/min running 24/7. Per this link -
http://www.tpc.org/tpcc/results/tpc...p?id=107031201, for
$63,000 you can build a system that will process 70K transactions/min at
steady state. And the TPC-C transaction definition is fairly complex, too.
2) Replication is usually not associated with performance gains - quite the
opposite.
3) You should hire an experienced person or company to spec, set up, install,
and configure a monster server for you. Make sure it is an entity that has
experience with very large systems. To do less will waste both time
and money, and you will most likely still not get a performant system.
4) Terabytes of RAM aren't achievable on SQL Server just yet; tens of GB
is pretty much it, AFAIK. What will really matter is the ability to get data
from disk (or the network, for a web/streaming app) into the CPUs VERY
quickly. You will also need a very optimized data structure, data-access
mechanisms, index strategy, and routine maintenance.
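As a sanity check on the arithmetic in point 1, here is a back-of-envelope sketch (assuming a 30-day month and a perfectly even 24/7 load, which real bursty web traffic will not have - size for peak, not this average):

```sql
-- Average transactions per minute required for 450M transactions/month,
-- assuming a 30-day month and an even 24/7 load.
SELECT 450000000.0 / (30 * 24 * 60) AS avg_xactions_per_min;
-- about 10417/min, in line with the ~10,500/min figure above
```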
TheSQLGuru
President
Indicium Resources, Inc.
|||Or, you can set up an active/active cluster and lay out the tables using
DPVs.
|||DPV's?
"Jay" wrote:
> Or, you can setup an active/active cluster and layout the tables using
> DPV's.
|||Hi,
Your answer is really helpful. Thanks a lot.
As I understand it, for less than $50K, it's not really possible to manage
that kind of workload.
What could be the maximum workload for a scale-up server? Let's say, for less
than $10K?
Thanks
Stephane
|||On Wed, 15 Aug 2007 16:08:01 -0700, Stephane
<Stephane@.discussions.microsoft.com> wrote:
>As I understand it, for less than $50K, it's not really possible to manage
>that kind of workload.
>What could be the maximum workload for a scale-up server? Let's say, for less
>than $10K?
Only you have the most important source of information for those sorts
of questions: the existing server. Analyze the load that 5M
transactions is placing on the server and how well it is dealing with
it. That should include analysis of peak load, not average over a
month. It should include analysis of CPU, memory and disk usage. Try
to identify the bottlenecks and estimate the headroom. Much of this
can be learned by using Performance Monitor. With that information
you can better know where to put your money - CPU or disk are the
obvious choices after loading up with memory.
You may also find that the application and database design need
tweaking. At higher transaction volumes minor issues can become major
ones, and a minor optimization can have a significant payback.
Roy Harvey
Beacon Falls, CT|||Distributed Partitioned Views.
You put part of a table on one server and another part on a different
server.
It's what you do to get a fully load-balanced system that is also fault
tolerant (it can survive losing a server).
One warning: it relies heavily on the DDL and is complicated to set up, but
with enough nodes it could support Google-type traffic.
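A minimal sketch of what Jay describes, with hypothetical table, server, and database names (the real layout, linked-server configuration, and partitioning column depend entirely on your schema):

```sql
-- Hypothetical two-node DPV. Each member table holds a distinct range of
-- the partitioning column, enforced by a CHECK constraint so the optimizer
-- can route each query to the right node.
-- On linked server SERVER_A (database StatsDB):
CREATE TABLE dbo.Hits_H1 (
    HitID    BIGINT       NOT NULL,
    HitMonth TINYINT      NOT NULL CHECK (HitMonth BETWEEN 1 AND 6),
    Url      VARCHAR(500) NOT NULL,
    CONSTRAINT PK_Hits_H1 PRIMARY KEY (HitID, HitMonth)
);
-- On linked server SERVER_B: the same table as dbo.Hits_H2,
-- with CHECK (HitMonth BETWEEN 7 AND 12).
-- On each node, the DPV unions the member tables across the linked servers:
CREATE VIEW dbo.Hits AS
    SELECT HitID, HitMonth, Url FROM SERVER_A.StatsDB.dbo.Hits_H1
    UNION ALL
    SELECT HitID, HitMonth, Url FROM SERVER_B.StatsDB.dbo.Hits_H2;
```

For the view to be updatable, the partitioning column must be part of the primary key and the CHECK ranges must not overlap - this is what Jay means by "it relies heavily on the DDL."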
|||> You may also find that the application and database design need
> tweaking. At higher transaction volumes minor issues can become major
> ones, and a minor optimization can have a significant payback.
Truer words are seldom spoken!|||On Wed, 15 Aug 2007 10:34:04 -0700, Stephane
<Stephane@.discussions.microsoft.com> wrote:
>We have developed an online web stats system. It currently handles about 5
>million transactions per month. A customer asked whether we could support as
>many as 450 million transactions a month.
Wait a minute - what do you mean by "stats system"? Does your server
*do* 5M trx/month, or does your server *analyze* the records of 5M
trx/month? If the latter, that's a very different thing!
J.