[Share] Server-side tuning related content

Posted on 2007-2-26 14:20:09
Tuning Garbage Collection with the 5.0 Java[tm] Virtual Machine
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
==========================






Tuning Garbage Collection with the 5.0 Java™ Virtual Machine
See also: Performance Docs





Table of Contents

1 Introduction

2 Ergonomics

3 Generations

3.1 Performance Considerations

3.2 Measurement

4 Sizing the Generations

4.1 Total Heap

4.2 The Young Generation

4.2.1 Young Generation Guarantee

5 Types of Collectors

5.1 When to Use the Throughput Collector

5.2 The Throughput Collector

5.2.1 Generations in the throughput collector

5.2.2 Ergonomics in the throughput collector

5.2.2.1 Priority of goals

5.2.2.2 Adjusting Generation Sizes

5.2.2.3 Heap Size

5.2.3 Out-of-Memory Exceptions

5.2.4 Measurements with the Throughput Collector

5.3 When to Use the Concurrent Low Pause Collector

5.4 The Concurrent Low Pause Collector

5.4.1 Overhead of Concurrency

5.4.2 Young Generation Guarantee

5.4.3 Full Collections

5.4.4 Floating Garbage

5.4.5 Pauses

5.4.6 Concurrent Phases

5.4.7 Scheduling a collection

5.4.8 Scheduling pauses

5.4.9 Incremental mode


    5.4.9.1 Command line


    5.4.9.2 Recommended Options for i-cms


    5.4.9.3 Basic Troubleshooting


5.4.10 Measurements with the Concurrent Collector

6 Other Considerations

7 Conclusion

8 Other Documentation

8.1 Example of Output

8.2 Frequently Asked Questions




Introduction
The Java™ 2 Platform, Standard Edition (J2SE™ platform) is used for a wide variety of applications, from small applets on desktops to web services on large servers. In the J2SE platform version 1.4.2 there were four garbage collectors from which to choose, but without an explicit choice by the user the serial garbage collector was always chosen. In version 5.0 the choice of collector is based on the class of the machine on which the application is started.

This “smarter choice” of garbage collector is generally better but is not always the best. For the user who wants to make their own choice of garbage collector, this document provides information on which to base that choice. It first covers the general features of the garbage collectors and the tuning options that take best advantage of those features. The examples are given in the context of the serial, stop-the-world collector. Then specific features of the other collectors are discussed, along with factors that should be considered when choosing one of them.

When does the choice of a garbage collector matter to the user? For many applications it doesn't. That is, the application can perform within its specifications in the presence of garbage collection with pauses of modest frequency and duration. An example where this is not the case (when the serial collector is used) would be a large application that scales well to a large number of threads, processors, sockets, and a large amount of memory.

Amdahl observed that most workloads cannot be perfectly parallelized; some portion is always sequential and does not benefit from parallelism. This is also true for the J2SE platform. In particular, virtual machines for the Java™ platform up to and including version 1.3.1 do not have parallel garbage collection, so the impact of garbage collection on a multiprocessor system grows relative to an otherwise parallel application.

The graph below models an ideal system that is perfectly scalable with the exception of garbage collection. The red line is an application spending only 1% of its time in garbage collection on a uniprocessor system. This translates to more than a 20% loss in throughput on a 32-processor system. At 10% of the time in garbage collection (not considered an outrageous amount of time in garbage collection for uniprocessor applications), more than 75% of throughput is lost when scaling up to 32 processors.
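A rough back-of-the-envelope check of those figures, assuming the non-collection work parallelizes perfectly while collection stays serial (a simplifying assumption, not the exact model behind the graph):

  ideal time on 32 processors:           1/32              ≈ 0.031
  with  1% serial GC:  0.99/32 + 0.01    ≈ 0.041   (roughly 24% of throughput lost)
  with 10% serial GC:  0.90/32 + 0.10    ≈ 0.128   (roughly 76% of throughput lost)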



This shows that negligible speed issues when developing on small systems may become principal bottlenecks when scaling up to large systems. However, small improvements in reducing such a bottleneck can produce large gains in performance. For a sufficiently large system it becomes well worthwhile to choose the right garbage collector and to tune it if necessary.

The serial collector will be adequate for the majority of applications. Each of the other collectors has some added overhead and/or complexity, which is the price of specialized behavior. If the application doesn't need the specialized behavior of an alternate collector, use the serial collector. An example of a situation where the serial collector is not expected to be the best choice is a large application that is heavily threaded and runs on hardware with a large amount of memory and a large number of processors. For such applications, we now make the choice of the throughput collector (see the discussion of ergonomics in section 2).

This document was written using the J2SE Platform version 1.5, on the Solaris™ Operating System (SPARC® Platform Edition) as the base platform, because it provides the most scalable hardware and software for the J2SE platform. However, the descriptive text applies to other supported platforms, including Linux, Microsoft Windows, and the Solaris Operating System (x86 Platform Edition), to the extent that scalable hardware is available. Although command line options are consistent across platforms, some platforms may have defaults different from those described here.

Ergonomics
New in the J2SE Platform version 1.5 is a feature referred to here as ergonomics. The goal of ergonomics is to provide good performance from the JVM with a minimum of command line tuning. Ergonomics attempts to match the best selection of

Garbage collector

Heap size

Runtime compiler

for an application. This selection assumes that the class of the machine on which the application is run is a hint as to the characteristics of the application (i.e., large applications run on large machines). In addition to these selections is a simplified way of tuning garbage collection. With the throughput collector the user can specify goals for a maximum pause time and a desired throughput for an application. This is in contrast to specifying the size of the heap that is needed for good performance. This is intended to particularly improve the performance of large applications that use large heaps. The more general ergonomics is described in the document entitled “Ergonomics in the 1.5 Java Virtual Machine”. It is recommended that the ergonomics as presented in this latter document be tried before using the more detailed controls explained in this document.

Included in this document under the throughput collector are the ergonomics features that are provided as part of the new adaptive size policy. This includes the new options to specify goals for the performance of garbage collection and additional options to fine tune that performance.


Generations
One strength of the J2SE platform is that it shields the developer from the complexity of memory allocation and garbage collection. However, once garbage collection is the principal bottleneck, it is worth understanding some aspects of this hidden implementation. Garbage collectors make assumptions about the way applications use objects, and these are reflected in tunable parameters that can be adjusted for improved performance without sacrificing the power of the abstraction.

An object is considered garbage when it can no longer be reached from any pointer in the running program. The most straightforward garbage collection algorithms simply iterate over every reachable object. Any objects left over are then considered garbage. The time this approach takes is proportional to the number of live objects, which is prohibitive for large applications maintaining lots of live data.

Beginning with the J2SE Platform version 1.2, the virtual machine incorporated a number of different garbage collection algorithms that are combined using generational collection. While naive garbage collection examines every live object in the heap, generational collection exploits several empirically observed properties of most applications to avoid extra work.

The most important of these observed properties is infant mortality. The blue area in the diagram below is a typical distribution for the lifetimes of objects. The X axis is object lifetimes measured in bytes allocated. The byte count on the Y axis is the total bytes in objects with the corresponding lifetime. The sharp peak at the left represents objects that can be reclaimed (i.e., have "died") shortly after being allocated. Iterator objects, for example, are often alive for the duration of a single loop.







Some objects do live longer, and so the distribution stretches out to the right. For instance, there are typically some objects allocated at initialization that live until the process exits. Between these two extremes are objects that live for the duration of some intermediate computation, seen here as the lump to the right of the infant mortality peak. Some applications have very different looking distributions, but a surprisingly large number possess this general shape. Efficient collection is made possible by focusing on the fact that a majority of objects "die young".

To optimize for this scenario, memory is managed in generations, or memory pools holding objects of different ages. Garbage collection occurs in each generation when the generation fills up. Objects are allocated in a generation for younger objects or the young generation, and because of infant mortality most objects die there. When the young generation fills up it causes a minor collection. Minor collections can be optimized assuming a high infant mortality rate. The costs of such collections are, to the first order, proportional to the number of live objects being collected. A young generation full of dead objects is collected very quickly. Some surviving objects are moved to a tenured generation. When the tenured generation needs to be collected there is a major collection that is often much slower because it involves all live objects.

The diagram below shows minor collections occurring at intervals long enough to allow many of the objects to die between collections. It is well-tuned in the sense that the young generation is large enough (and thus the period between minor collections long enough) that the minor collection can take advantage of the high infant mortality rate. This situation can be upset by applications with unusual lifetime distributions, or by poorly sized generations that cause collections to occur before objects have had time to die.

As noted in section 2, ergonomics now makes a different choice of garbage collector in order to provide good performance on a variety of applications. The serial garbage collector is meant to be used by small applications. Its default parameters were designed to be effective for most small applications. The throughput garbage collector is meant to be used by large applications. The heap size parameters selected by ergonomics plus the features of the adaptive size policy are meant to provide good performance for server applications. These choices work well for many applications but do not always work. This leads to the central tenet of this document:

If the garbage collector has become a bottleneck, you may wish to customize the generation sizes. Check the verbose garbage collector output, and then explore the sensitivity of your individual performance metric to the garbage collector parameters.










The default arrangement of generations (for all collectors with the exception of the throughput collector) looks something like this.



At initialization, a maximum address space is virtually reserved but not allocated to physical memory unless it is needed. The complete address space reserved for object memory can be divided into the young and tenured generations.

The young generation consists of eden plus two survivor spaces. Objects are initially allocated in eden. One survivor space is empty at any time, and serves as the destination of the next copying collection of any live objects in eden and in the other survivor space. Objects are copied between survivor spaces in this way until they are old enough to be tenured, or copied to the tenured generation.

Other virtual machines, including the production virtual machine for the J2SE Platform version 1.2 for the Solaris Operating System, used two equally sized spaces for copying rather than one large eden plus two small spaces. This means the options for sizing the young generation are not directly comparable; see the Performance FAQ for an example.

A third generation closely related to the tenured generation is the permanent generation. The permanent generation is special because it holds data needed by the virtual machine to describe objects that have no equivalent at the Java language level. For example, objects describing classes and methods are stored in the permanent generation.





3.1 Performance Considerations
There are two primary measures of garbage collection performance. Throughput is the percentage of total time not spent in garbage collection, considered over long periods of time. Throughput includes time spent in allocation (but tuning for speed of allocation is generally not needed.) Pauses are the times when an application appears unresponsive because garbage collection is occurring.

Users have different requirements of garbage collection. For example, some consider the right metric for a web server to be throughput, since pauses during garbage collection may be tolerable, or simply obscured by network latencies. However, in an interactive graphics program even short pauses may negatively affect the user experience.

Some users are sensitive to other considerations. Footprint is the working set of a process, measured in pages and cache lines. On systems with limited physical memory or many processes, footprint may dictate scalability. Promptness is the time between when an object becomes dead and when the memory becomes available, an important consideration for distributed systems, including remote method invocation (RMI).

In general, a particular generation sizing chooses a trade-off between these considerations. For example, a very large young generation may maximize throughput, but does so at the expense of footprint, promptness, and pause times. Young generation pauses can be minimized by using a small young generation, at the expense of throughput. To a first approximation, the sizing of one generation does not affect the collection frequency and pause times for another generation.

There is no one right way to size generations. The best choice is determined by the way the application uses memory as well as by user requirements. For this reason the virtual machine's choice of a garbage collector is not always optimal, and may be overridden by the user in the form of the command line options described below.

3.2 Measurement
Throughput and footprint are best measured using metrics particular to the application. For example, throughput of a web server may be tested using a client load generator, while footprint of the server might be measured on the Solaris Operating System using the pmap command. On the other hand, pauses due to garbage collection are easily estimated by inspecting the diagnostic output of the virtual machine itself.

The command line argument -verbose:gc prints information at every collection. Note that the format of the -verbose:gc output is subject to change between releases of the J2SE platform. For example, here is output from a large server application:

  [GC 325407K->83000K(776768K), 0.2300771 secs]
  [GC 325816K->83372K(776768K), 0.2454258 secs]
  [Full GC 267628K->83769K(776768K), 1.8479984 secs]

Here we see two minor collections and one major one. The numbers before and after the arrow

325407K->83000K (in the first line)




indicate the combined size of live objects before and after garbage collection, respectively. After minor collections the count may include objects that are not necessarily alive but cannot yet be reclaimed, because they are either within the tenured generation or referenced from it. The number in parenthesis

(776768K)(in the first line)




is the total available space, not counting the space in the permanent generation, which is the total heap minus one of the survivor spaces. The minor collection took about a quarter of a second.

0.2300771 secs (in the first line)

The format for the major collection in the third line is similar. The flag -XX:+PrintGCDetails prints additional information about the collections. The additional information printed with this flag is liable to change with each version of the virtual machine, as it follows the needs of the development of the Java Virtual Machine. An example of the output with -XX:+PrintGCDetails for the J2SE Platform version 1.5 using the serial garbage collector is shown here.

[GC [DefNew: 64575K->959K(64576K), 0.0457646 secs] 196016K->133633K(261184K), 0.0459067 secs]

indicates that the minor collection recovered about 98% of the young generation,

DefNew: 64575K->959K(64576K)

and took about 46 milliseconds.

0.0457646 secs

The usage of the entire heap was reduced to about 51%

196016K->133633K(261184K)

and that there was some slight additional overhead for the collection (over and above the collection of the young generation) as indicated by the final time:

0.0459067 secs

The flag -XX:+PrintGCTimeStamps will additionally print a time stamp at the start of each collection.

111.042: [GC 111.042: [DefNew: 8128K->8128K(8128K), 0.0000505 secs]111.042: [Tenured: 18154K->2311K(24576K), 0.1290354 secs] 26282K->2311K(32704K), 0.1293306 secs]

The collection starts about 111 seconds into the execution of the application. The minor collection starts at about the same time. Additionally the information is shown for a major collection delineated by Tenured. The tenured generation usage was reduced to about 10%

18154K->2311K(24576K)

and took about 0.13 seconds.

0.1290354 secs

Sizing the Generations
A number of parameters affect generation size. The following diagram illustrates the difference between committed space and virtual space in the heap. At initialization of the virtual machine, the entire space for the heap is reserved. The size of the space reserved can be specified with the -Xmx option. If the value of the -Xms parameter is smaller than the value of the -Xmx parameter, not all of the space that is reserved is immediately committed to the virtual machine. The uncommitted space is labeled "virtual" in this figure. The different parts of the heap (permanent generation, tenured generation, and young generation) can grow to the limit of the virtual space as needed.

Some of the parameters are ratios of one part of the heap to another. For example the parameter NewRatio denotes the relative size of the tenured generation to the young generation. These parameters are discussed below.



The discussion that follows regarding the growing and shrinking of the heap does not apply to the throughput collector. The resizing of the heap for the throughput collector is governed by the ergonomics discussed in section 5.2.2. The parameters that control the total size of the heap and the sizes of the generations do apply to the throughput collector.

4.1 Total Heap
Since collections occur when generations fill up, throughput is inversely proportional to the amount of memory available. Total available memory is the most important factor affecting garbage collection performance.

By default, the virtual machine grows or shrinks the heap at each collection to try to keep the proportion of free space to live objects at each collection within a specific range. This target range is set as a percentage by the parameters -XX:MinHeapFreeRatio=<minimum> and -XX:MaxHeapFreeRatio=<maximum>, and the total size is bounded below by -Xms and above by -Xmx. The default parameters for the 32-bit Solaris Operating System (SPARC Platform Edition) are shown in this table:




-XX:MinHeapFreeRatio=    40
-XX:MaxHeapFreeRatio=    70
-Xms                     3670k
-Xmx                     64m





Default values of heap size parameters on 64-bit systems have been scaled up by approximately 30%. This increase is meant to compensate for the larger size of objects on a 64-bit system.

With these parameters if the percent of free space in a generation falls below 40%, the size of the generation will be expanded so as to have 40% of the space free, assuming the size of the generation has not already reached its limit. Similarly, if the percent of free space exceeds 70%, the size of the generation will be shrunk so as to have only 70% of the space free as long as shrinking the generation does not decrease it below the minimum size of the generation.
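For instance, a hypothetical invocation that sets these bounds and targets explicitly (the sizes and the MyApp class name are placeholders, not recommendations) might look like:

  java -Xms64m -Xmx512m -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 MyApp

Here the heap starts at 64 MB and may grow up to 512 MB, with the free-space ratios steering when it expands or shrinks.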

Large server applications often experience two problems with these defaults. One is slow startup, because the initial heap is small and must be resized over many major collections. A more pressing problem is that the default maximum heap size is unreasonably small for most server applications. The rules of thumb for server applications are:







Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.

Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.

Be sure to increase the memory as you increase the number of processors, since allocation can be parallelized.






A description of other virtual machine options can be found at

http://java.sun.com/docs/hotspot/VMOptions.html




4.2 The Young Generation
The second most influential knob is the proportion of the heap dedicated to the young generation. The bigger the young generation, the less often minor collections occur. However, for a bounded heap size a larger young generation implies a smaller tenured generation, which will increase the frequency of major collections. The optimal choice depends on the lifetime distribution of the objects allocated by the application.

By default, the young generation size is controlled by NewRatio. For example, setting -XX:NewRatio=3 means that the ratio between the young and tenured generation is 1:3. In other words, the combined size of the eden and survivor spaces will be one fourth of the total heap size.

The parameters NewSize and MaxNewSize bound the young generation size from below and above. Setting these equal to one another fixes the young generation, just as setting -Xms and -Xmx equal fixes the total heap size. This is useful for tuning the young generation at a finer granularity than the integral multiples allowed by NewRatio.
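As an illustration (the sizes and the MyApp class name are hypothetical, not recommendations), the young generation could be fixed at 64 MB within a 256 MB heap with:

  java -Xms256m -Xmx256m -XX:NewSize=64m -XX:MaxNewSize=64m MyApp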

4.2.1 Young Generation Guarantee
In an ideal minor collection the live objects are copied from one part of the young generation (the eden space plus the first survivor space) to another part of the young generation (the second survivor space). However, there is no guarantee that all the live objects will fit into the second survivor space. To ensure that the minor collection can complete even if all the objects are live, enough free memory must be reserved in the tenured generation to accommodate all the live objects. In the worst case, this reserved memory is equal to the size of eden plus the objects in the non-empty survivor space. When there isn't enough memory available in the tenured generation for this worst case, a major collection will occur instead. This policy is fine for small applications, because the memory reserved in the tenured generation is typically only virtually committed but not actually used. But for applications needing the largest possible heap, an eden bigger than half the virtually committed size of the heap is useless: only major collections would occur. Note that the young generation guarantee applies only to the serial collector. The throughput collector and the concurrent collector will proceed with a young generation collection, and if the tenured generation cannot accommodate all the promotions from the young generation, both generations are collected.





If desired, the parameter SurvivorRatio can be used to tune the size of the survivor spaces, but this is often not as important for performance. For example, -XX:SurvivorRatio=6 sets the ratio between each survivor space and eden to be 1:6. In other words, each survivor space will be one eighth of the young generation (not one seventh, because there are two survivor spaces).

If survivor spaces are too small, copying collection overflows directly into the tenured generation. If survivor spaces are too large, they will be uselessly empty. At each garbage collection the virtual machine chooses a threshold number of times an object can be copied before it is tenured. This threshold is chosen to keep the survivor spaces half full. The option -XX:+PrintTenuringDistribution can be used to show this threshold and the ages of objects in the new generation. It is also useful for observing the lifetime distribution of an application.
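For example (hypothetical sizes and class name), the survivor-space ratio and tenuring behavior could be observed with:

  java -Xmx256m -XX:SurvivorRatio=6 -XX:+PrintTenuringDistribution MyApp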

Here are the default values for the 32-bit Solaris Operating System (SPARC Platform Edition):




NewRatio         2 (client JVM: 8)
NewSize          2228k
MaxNewSize       not limited
SurvivorRatio    32






The maximum size of the young generation will be calculated from the maximum size of the total heap and NewRatio. The "not limited" default value for MaxNewSize means that the calculated value is not limited by MaxNewSize unless a value for MaxNewSize is specified on the command line.

The rules of thumb for server applications are:

First decide the total amount of memory you can afford to give the virtual machine. Then graph your own performance metric against young generation sizes to find the best setting.

Unless you find problems with excessive major collection or pause times, grant plenty of memory to the young generation.

Increasing the young generation becomes counterproductive at half the total heap or less (whenever the young generation guarantee cannot be met).

Be sure to increase the young generation as you increase the number of processors, since allocation can be parallelized.









Types of Collectors
The discussion to this point has been about the serial collector. In the J2SE Platform version 1.5 there are three additional collectors. Each is a generational collector which has been implemented to emphasize the throughput of the application or low garbage collection pause times.

The throughput collector: this collector uses a parallel version of the young generation collector. It is used if the -XX:+UseParallelGC option is passed on the command line. The tenured generation collector is the same as the serial collector.

The concurrent low pause collector: this collector is used if the -Xincgc or -XX:+UseConcMarkSweepGC option is passed on the command line. The concurrent collector is used to collect the tenured generation and does most of the collection concurrently with the execution of the application. The application is paused for short periods during the collection. A parallel version of the young generation copying collector is used with the concurrent collector.

The incremental (sometimes called train) low pause collector: this collector is used only if -XX:+UseTrainGC is passed on the command line. This collector has not changed since the J2SE Platform version 1.4.2 and is currently not under active development. It will not be supported in future releases. Please see the 1.4.2 GC Tuning Document for information on this collector.

Note that -XX:+UseParallelGC should not be used with -XX:+UseConcMarkSweepGC. The argument parsing in the J2SE Platform starting with version 1.4.2 should only allow legal combinations of command line options for garbage collectors, but earlier releases may not detect all illegal combinations and the results for illegal combinations are unpredictable.

Always try the collector chosen by the JVM on your application before explicitly selecting another collector. Tune the heap size for your application and then consider what requirements of your application are not being met. Based on the latter, consider using one of the other collectors.

5.1 When to Use the Throughput Collector
Use the throughput collector when you want to improve the performance of your application with larger numbers of processors. In the serial collector garbage collection is done by one thread, and therefore garbage collection adds to the serial execution time of the application. The throughput collector uses multiple threads to execute a minor collection and so reduces the serial execution time of the application. A typical situation is one in which the application has a large number of threads allocating objects. In such an application it is often the case that a large young generation is needed.

5.2 The Throughput Collector
The throughput collector is a generational collector similar to the serial collector but with multiple threads used to do the minor collection. The major collections are essentially the same as with the serial collector. By default on a host with N CPUs, the throughput collector uses N garbage collector threads in the minor collection. The number of garbage collector threads can be controlled with a command line option (see below). On a host with 1 CPU the throughput collector will likely not perform as well as the serial collector because of the additional overhead for the parallel execution (e.g., synchronization costs). On a host with 2 CPUs the throughput collector generally performs as well as the serial garbage collector and a reduction in the minor garbage collector pause times can be expected on hosts with more than 2 CPUs.

The throughput collector can be enabled by using the command line flag -XX:+UseParallelGC. The number of garbage collector threads can be controlled with the ParallelGCThreads command line option (-XX:ParallelGCThreads=<desired number>). If explicit tuning of the heap is being done with command line flags, the size of the heap needed for good performance with the throughput collector is, to first order, the same as needed with the serial collector. Turning on the throughput collector should just make the minor collection pauses shorter. Because there are multiple garbage collector threads participating in the minor collection, there is a small possibility of fragmentation due to promotions from the young generation to the tenured generation during the collection. Each garbage collection thread reserves a part of the tenured generation for promotions, and the division of the available space into these "promotion buffers" can cause a fragmentation effect. Reducing the number of garbage collector threads will reduce this fragmentation effect, as will increasing the size of the tenured generation.
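For example (the thread count and the MyApp class name are illustrative assumptions), the throughput collector could be selected with four collector threads using:

  java -XX:+UseParallelGC -XX:ParallelGCThreads=4 MyApp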

5.2.1 Generations in the throughput collector
As mentioned earlier the arrangement of the generations is different in the throughput collector. That arrangement is shown in the figure below.










5.2.2 Ergonomics in the throughput collector
In the J2SE Platform version 1.5 the throughput collector will be chosen as the garbage collector on server class machines. The document Ergonomics in the 5.0 Java Virtual Machine discusses this selection of the garbage collector. For the throughput collector a new method of tuning has been added which is based on the desired behavior of the application with respect to garbage collection. The following command line flags can be used to specify the desired behavior in terms of goals for the maximum pause time and the throughput of the application.

The maximum pause time goal is specified with the command line flag

-XX:MaxGCPauseMillis=<nnn>

This is interpreted as a hint to the throughput collector that pause times of <nnn> milliseconds or less are desired. The throughput collector will adjust the Java heap size and other garbage collection related parameters in an attempt to keep garbage collection pauses shorter than <nnn> milliseconds. These adjustments may cause the garbage collector to reduce the overall throughput of the application, and in some cases the desired pause time goal cannot be met. By default no maximum pause time goal is set.




The throughput goal is measured in terms of the time spent doing garbage collection and the time spent outside of garbage collection (referred to as application time). The goal is specified by the command line flag

-XX:GCTimeRatio=<nnn>

The ratio of garbage collection time to application time is




1 / (1 + <nnn>)

For example -XX:GCTimeRatio=19 sets a goal of 5% of the total time for garbage collection. By default the goal for total time for garbage collection is 1%.

Additionally, as an implicit goal the throughput collector will try to meet the other goals in the smallest heap that it can.
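Putting the goals together, a hypothetical invocation (the values and the MyApp class name are illustrative only) asking for pauses under 200 milliseconds and no more than 5% of time spent in collection would be:

  java -XX:+UseParallelGC -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=19 MyApp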

5.2.2.1 Priority of goals
The goals are addressed in the following order

Maximum pause time goal

Throughput goal

Minimum footprint goal

The maximum pause time goal is met first. Only after it is met is the throughput goal addressed. Similarly, only after the first two goals have been met is the footprint goal considered.

5.2.2.2 Adjusting Generation Sizes
The statistics (e.g., average pause time) kept by the collector are updated at the end of a collection. The tests to determine whether the goals have been met are then made, and any needed adjustments to the sizes of the generations are made. The exception is that explicit garbage collections (calls to System.gc()) are ignored in terms of keeping statistics and making adjustments to the sizes of generations.

Growing and shrinking the size of a generation is done by increments that are a fixed percentage of the size of the generation. A generation steps up or down toward its desired size. Growing and shrinking are done at different rates. By default a generation grows in increments of 20% and shrinks in increments of 5%. The percentage for growing is controlled by the command line flag -XX:YoungGenerationSizeIncrement=<nnn> for the young generation and -XX:TenuredGenerationSizeIncrement=<nnn> for the tenured generation. The percentage by which a generation shrinks is adjusted by the command line flag -XX:AdaptiveSizeDecrementScaleFactor=<nnn>. If the size of an increment for growing is XXX percent, the size of the decrement for shrinking will be XXX / nnn percent.
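A quick worked example (illustrative numbers only): with a growth increment of 20% and -XX:AdaptiveSizeDecrementScaleFactor=4, the shrink decrement comes out to 20% / 4 = 5%, which matches the default shrink rate quoted above.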

If the collector decides to grow a generation at startup, there is a supplemental percentage added to the increment. This supplement decays with the number of collections and has no long term effect. The intent of the supplement is to improve startup performance. There is no supplement to the percentage for shrinking.

If the maximum pause time goal is not being met, the size of only one generation is shrunk at a time. If the pause times of both generations are above the goal, the size of the generation with the larger pause time is shrunk first.

If the throughput goal is not being met, the sizes of both generations are increased. Each is increased in proportion to its respective contribution to the total garbage collection time. For example, if the garbage collection time of the young generation is 25% of the total collection time and a full increment of the young generation would be 20%, then the young generation is increased by 5%.

5.2.2.3 Heap Size
If not otherwise set on the command line, the sizes of the initial heap and maximum heap are calculated based on the size of the physical memory. If phys_mem is the size of the physical memory on the platform, the initial heap size will be set to phys_mem / DefaultInitialRAMFraction. DefaultInitialRAMFraction is a command line option with a default value of 64. Similarly the maximum heap size will be set to phys_mem / DefaultMaxRAMFraction. DefaultMaxRAMFraction has a default value of 4.
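As a rough illustration (assuming the default fractions of 64 and 4, and ignoring any platform-specific caps), a machine with 2 GB of physical memory would get:

  initial heap = 2048 MB / 64 = 32 MB
  maximum heap = 2048 MB / 4  = 512 MB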

5.2.3 Out-of-Memory Exceptions
The throughput collector will throw an out-of-memory exception if too much time is being spent doing garbage collection. For example, if the JVM is spending more than 98% of the total time doing garbage collection and is recovering less than 2% of the heap, it will throw an out-of-memory exception. The implementation of this feature has changed in 1.5. The policy is the same but there may be slight differences in behavior due to the new implementation.

5.2.4 Measurements with the Throughput Collector
The verbose garbage collector output is the same for the throughput collector as with the serial collector.

5.3 When to Use the Concurrent Low Pause Collector
Use the concurrent low pause collector if your application would benefit from shorter garbage collector pauses and can afford to share processor resources with the garbage collector while the application is running. Typically, applications that have a relatively large set of long-lived data (a large tenured generation) and run on machines with two or more processors tend to benefit from the use of this collector. However, this collector should be considered for any application with a low pause time requirement. Optimal results have been observed for interactive applications with tenured generations of a modest size on a single processor.

5.4 The Concurrent Low Pause Collector
The concurrent low pause collector is a generational collector similar to the serial collector. The tenured generation is collected concurrently with this collector.

This collector attempts to reduce the pause times needed to collect the tenured generation. It uses a separate garbage collector thread to do parts of the major collection concurrently with the application threads. The concurrent collector is enabled with the command line option -XX:+UseConcMarkSweepGC. For each major collection the concurrent collector will pause all the application threads for a brief period at the beginning of the collection and toward the middle of the collection. The second pause tends to be the longer of the two, and multiple threads are used to do the collection work during that pause. The remainder of the collection is done with a garbage collector thread that runs concurrently with the application. The minor collections are done in a manner similar to the serial collector, although multiple threads are used to do the collection. See "Parallel Minor Collection Options with the Concurrent Collector" below for information on using multiple threads with the concurrent low pause collector.

The techniques used in the concurrent collector (for the collection of the tenured generation) are described at:

http://research.sun.com/techrep/2000/abstract-88.html

5.4.1 Overhead of Concurrency
The concurrent collector trades processor resources (which would otherwise be available to the application) for shorter major collection pause times. The concurrent part of the collection is done by a single garbage collection thread. On an N processor system, when the concurrent part of the collection is running it uses 1/Nth of the available processor power. On a uniprocessor machine it would be fortuitous if it provided any advantage (see the section on incremental mode for the exception to this statement). The concurrent collector also has some additional overhead costs that will take away from the throughput of the application, and some inherent disadvantages (e.g., fragmentation) for some types of applications. On a two processor machine there is a processor available for application threads while the concurrent part of the collection is running, so running the concurrent garbage collector thread does not "pause" the application. There may be reduced pause times as intended for the concurrent collector, but again less processor resources are available to the application and some slowdown of the application should be expected. As N increases, the reduction in processor resources due to the running of the concurrent garbage collector thread becomes smaller, and the advantages of the concurrent collector become greater.

5.4.2 Young Generation Guarantee
Prior to J2SE Platform version 1.5 the concurrent collector had to satisfy the young generation guarantee just as the serial collector does. Starting with J2SE Platform version 1.5 this is no longer true. The concurrent collector can recover if it starts a young generation collection and there is not enough space in the tenured generation to hold all the objects that require promotion from the young generation. This is similar to the throughput collector.

5.4.3 Full Collections
The concurrent collector uses a single garbage collector thread that runs simultaneously with the application threads with the goal of completing the collection of the tenured generation before it becomes full. In normal operation, the concurrent collector is able to do most of its work with the application threads still running, so only brief pauses are seen by the application threads. As a fall back, if the concurrent collector is unable to finish before the tenured generation fills up, the application is paused and the collection is completed with all the application threads stopped. Such collections with the application stopped are referred to as full collections and are a sign that some adjustments need to be made to the concurrent collection parameters.

5.4.4 Floating Garbage
A garbage collector works to find the live objects in the heap. Because application threads and the garbage collector thread run concurrently during a major collection, objects that are found to be alive by the garbage collector thread may become dead by the time the collection finishes. Such objects are referred to as floating garbage. The amount of floating garbage depends on the length of the concurrent collection (more time for the application threads to discard an object) and on the particulars of the application. As a rough rule of thumb, try increasing the size of the tenured generation by 20% to account for the floating garbage. Floating garbage is collected at the next garbage collection.

5.4.5 Pauses
The concurrent collector pauses an application twice during a concurrent collection cycle. The first pause is to mark as live the objects directly reachable from the roots (e.g., objects on thread stack, static objects and so on) and elsewhere in the heap (e.g., the young generation). This first pause is referred to as the initial mark. The second pause comes at the end of the marking phase and finds objects that were missed during the concurrent marking phase due to the concurrent execution of the application threads. The second pause is referred to as the remark.

5.4.6 Concurrent Phases
The concurrent marking occurs between the initial mark and the remark. During the concurrent marking the concurrent garbage collector thread is executing and using processor resources that would otherwise be available to the application. After the remark there is a concurrent sweeping phase which collects the dead objects. During this phase the concurrent garbage collector thread is again taking processor resources from the application. After the sweeping phase the concurrent collector sleeps until the start of the next major collection.

5.4.7 Scheduling a collection
With the serial collector a major collection is started when the tenured generation becomes full and all application threads are stopped while the collection is done. In contrast a concurrent collection should be started at a time such that the collection can finish before the tenured generation becomes full. There are several ways a concurrent collection can be started.

The concurrent collector keeps statistics on the time remaining before the tenured generation is full (T-until-full) and on the time needed to do a concurrent collection (T-collect). When the T-until-full approaches T-collect, a concurrent collection is started. This test is appropriately padded so as to start a collection conservatively early.

A concurrent collection will also start if the occupancy of the tenured generation grows above the initiating occupancy (i.e., the percentage of the current tenured generation that is used before a concurrent collection is started). The initiating occupancy by default is set to about 68%. It can be set with the parameter CMSInitiatingOccupancyFraction, which can be set on the command line with the flag

-XX:CMSInitiatingOccupancyFraction=<nn>

The value <nn> is a percentage of the current tenured generation size.
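For example (an illustrative value and class name, not a recommendation), concurrent collections could be started once the tenured generation is about 60% occupied with:

  java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 MyApp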

5.4.8 Scheduling pauses
The pauses for the young generation collection and the tenured generation collection occur independently. They cannot overlap, but they can occur in quick succession such that the pause from one collection immediately followed by one from the other collection can appear to be a single, longer pause. To avoid this the remark pauses for a concurrent collection are scheduled to be midway between the previous and next young generation pauses. The initial mark pause is typically too short to be worth scheduling.

5.4.9 Incremental mode
The concurrent collector can be used in a mode in which the concurrent phases are done incrementally. Recall that during a concurrent phase the garbage collector thread is using a processor. The incremental mode is meant to lessen the impact of long concurrent phases by periodically stopping the concurrent phase to yield the processor back to the application. This mode (referred to here as “i-cms”) divides the work done concurrently by the collector into small chunks of time which are scheduled between young generation collections. This feature is useful when applications that need the low pause times provided by the concurrent collector are run on machines with small numbers of processors (e.g., 1 or 2).

The concurrent collection cycle typically includes the following steps:

stop all application threads; do the initial mark; resume all application threads

do the concurrent mark (uses one processor for the concurrent work)

do the concurrent pre-clean (uses one processor for the concurrent work)

stop all application threads; do the remark; resume all application threads

do the concurrent sweep (uses one processor for the concurrent work)

do the concurrent reset (uses one processor for the concurrent work)

Normally, the concurrent collector uses one processor for the concurrent work for the entire concurrent mark phase, without (voluntarily) relinquishing it. Similarly, one processor is used for the entire concurrent sweep phase, again without relinquishing it. This processor utilization can be too much of a disruption for applications with pause time constraints, particularly when run on systems with just one or two processors. i-cms solves this problem by breaking up the concurrent phases into short bursts of activity, which are scheduled to occur mid-way between minor pauses.

I-cms uses a "duty cycle" to control the amount of work the concurrent collector is allowed to do before voluntarily giving up the processor. The duty cycle is the percentage of time between young generation collections that the concurrent collector is allowed to run. I-cms can automatically compute the duty cycle based on the behavior of the application (the recommended method), or the duty cycle can be set to a fixed value on the command line.

5.4.9.1 Command line

The following command-line options control i-cms (see below for recommendations for an initial set of options):

-XX:+CMSIncrementalMode default: disabled

This flag enables the incremental mode. Note that the concurrent collector must be enabled (with -XX:+UseConcMarkSweepGC) for this option to work.

-XX:+CMSIncrementalPacing default: disabled

This flag enables automatic adjustment of the incremental mode duty cycle based on statistics collected while the JVM is running.

-XX:CMSIncrementalDutyCycle=<N> default: 50

This is the percentage (0-100) of time between minor collections that the concurrent collector is allowed to run. If CMSIncrementalPacing is enabled, then this is just the initial value.

-XX:CMSIncrementalDutyCycleMin=<N> default: 10

This is the percentage (0-100) which is the lower bound on the duty cycle when CMSIncrementalPacing is enabled.

-XX:CMSIncrementalSafetyFactor=<N> default: 10

This is the percentage (0-100) used to add conservatism when computing the duty cycle.

-XX:CMSIncrementalOffset=<N> default: 0

This is the percentage (0-100) by which the incremental mode duty cycle is shifted to the right within the period between minor collections.

-XX:CMSExpAvgFactor=<N> default: 25

This is the percentage (0-100) used to weight the current sample when computing exponential averages for the concurrent collection statistics.

5.4.9.2 Recommended Options for i-cms

When trying i-cms, we recommend the following as an initial set of command line options:

-XX:+UseConcMarkSweepGC \

-XX:+CMSIncrementalMode \

-XX:+CMSIncrementalPacing \

-XX:CMSIncrementalDutyCycleMin=0 \

-XX:CMSIncrementalDutyCycle=10 \

-XX:+PrintGCDetails \

-XX:+PrintGCTimeStamps \

-XX:-TraceClassUnloading




The first three options enable the concurrent collector, i-cms, and i-cms automatic pacing. The next two set the minimum duty cycle to 0 and the initial duty cycle to 10, since the default values (10 and 50, respectively) are too large for a number of applications. The last three options cause diagnostic information on the collection to be written to stdout, so that the behavior of i-cms can be seen and later analyzed.

5.4.9.3 Basic Troubleshooting

The i-cms automatic pacing feature uses statistics gathered while the program is running to compute a duty cycle so that concurrent collections complete before the heap becomes full. However, past behavior is not a perfect predictor of future behavior and the estimates may not always be accurate enough to prevent the heap from becoming full. If too many full collections occur, try the following steps, one at a time:

Increase the safety factor:

-XX:CMSIncrementalSafetyFactor=<N>

Increase the minimum duty cycle:

-XX:CMSIncrementalDutyCycleMin=<N>

Disable automatic pacing and use a fixed duty cycle:

-XX:-CMSIncrementalPacing -XX:CMSIncrementalDutyCycle=<N>

5.4.10 Measurements with the Concurrent Collector
Below is output for -verbose:gc with -XX:+PrintGCDetails (some details have been removed). Note that the output for the concurrent collector is interspersed with the output from the minor collections. Typically many minor collections will occur during a concurrent collection cycle. The CMS-initial-mark: indicates the start of the concurrent collection cycle. The CMS-concurrent-mark: indicates the end of the concurrent marking phase and CMS-concurrent-sweep: marks the end of the concurrent sweeping phase. Not discussed before is the precleaning phase indicated by CMS-concurrent-preclean:. Precleaning represents work that can be done concurrently and is in preparation for the remark phase CMS-remark. The final phase is indicated by the CMS-concurrent-reset: and is in preparation for the next concurrent collection.

[GC [1 CMS-initial-mark: 13991K(20288K)] 14103K(22400K), 0.0023781 secs]

[GC [DefNew: 2112K->64K(2112K), 0.0837052 secs] 16103K->15476K(22400K), 0.0838519 secs]

...

[GC [DefNew: 2077K->63K(2112K), 0.0126205 secs] 17552K->15855K(22400K), 0.0127482 secs]

[CMS-concurrent-mark: 0.267/0.374 secs]

[GC [DefNew: 2111K->64K(2112K), 0.0190851 secs] 17903K->16154K(22400K), 0.0191903 secs]

[CMS-concurrent-preclean: 0.044/0.064 secs]

[GC[1 CMS-remark: 16090K(20288K)] 17242K(22400K), 0.0210460 secs]

[GC [DefNew: 2112K->63K(2112K), 0.0716116 secs] 18177K->17382K(22400K), 0.0718204 secs]

[GC [DefNew: 2111K->63K(2112K), 0.0830392 secs] 19363K->18757K(22400K), 0.0832943 secs]

...

[GC [DefNew: 2111K->0K(2112K), 0.0035190 secs] 17527K->15479K(22400K), 0.0036052 secs]

[CMS-concurrent-sweep: 0.291/0.662 secs]

[GC [DefNew: 2048K->0K(2112K), 0.0013347 secs] 17527K->15479K(27912K), 0.0014231 secs]

[CMS-concurrent-reset: 0.016/0.016 secs]

[GC [DefNew: 2048K->1K(2112K), 0.0013936 secs] 17527K->15479K(27912K), 0.0014814 secs]





The initial mark pause is typically short relative to the minor collection pause time. The times of the concurrent phases (concurrent mark, concurrent precleaning, and concurrent sweep) may be relatively long (as in the example above) when compared to a minor collection pause but the application is not paused during the concurrent phases. The remark pause is affected by the specifics of the application (e.g., a higher rate of modifying objects can increase this pause) and the time since the last minor collection (i.e., more objects in the young generation may increase this pause).




Other Considerations
For most applications the permanent generation is not relevant to garbage collector performance. However, some applications dynamically generate and load many classes; for instance, some implementations of JSP™ pages do this. If necessary, the maximum permanent generation size can be increased with MaxPermSize.
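For example (an illustrative size and class name), the permanent generation ceiling could be raised with:

  java -XX:MaxPermSize=128m MyApp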

Some applications interact with garbage collection by using finalization and weak/soft/phantom references. These features can create performance artifacts at the Java programming language level. An example of this is relying on finalization to close file descriptors, which makes an external resource (descriptors) dependent on garbage collection promptness. Relying on garbage collection to manage resources other than memory is almost always a bad idea.

Another way applications can interact with garbage collection is by invoking full garbage collections explicitly, such as through the System.gc() call. These calls force a major collection and inhibit scalability on large systems. The performance impact of explicit garbage collections can be measured by disabling them with the flag -XX:+DisableExplicitGC.
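For example (hypothetical class name), running the application once as usual and once with explicit collections disabled makes their cost visible:

  java -verbose:gc -XX:+DisableExplicitGC MyApp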

One of the most commonly encountered uses of explicit garbage collection occurs with RMI's distributed garbage collection (DGC). Applications using RMI refer to objects in other virtual machines. Garbage can't be collected in these distributed applications without occasional local collection, so RMI forces periodic full collection. The frequency of these collections can be controlled with properties. For example,

java -Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000 ...

specifies explicit collection once per hour instead of the default rate of once per minute. However, this may also cause some objects to take much longer to be reclaimed. These properties can be set as high as Long.MAX_VALUE to make the time between explicit collections effectively infinite, if there is no desire for an upper bound on the timeliness of DGC activity.


The Solaris 8 Operating System supports an alternate version of libthread that binds threads to light-weight processes (LWPs) directly. Some applications can benefit greatly from the use of the alternate libthread. This is a potential benefit for any threaded application. To try this, set the environment variable LD_LIBRARY_PATH to include /usr/lib/lwp before launching the virtual machine. The alternate libthread is the default libthread in the Solaris 9 Operating System.

Soft references are cleared less aggressively in the server virtual machine than the client. The rate of clearing can be slowed by increasing the parameter SoftRefLRUPolicyMSPerMB with the command line flag -XX:SoftRefLRUPolicyMSPerMB=10000. SoftRefLRUPolicyMSPerMB is a measure of the time that a soft reference survives for a given amount of free space in the heap. The default value is 1000 ms per megabyte. This can be read to mean that a soft reference will survive (after the last strong reference to the object has been collected) for 1 second for each megabyte of free space in the heap. This is very approximate.
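A rough worked example of the default policy (1000 ms per megabyte of free heap): with about 100 MB of the heap free, a softly reachable object would be expected to survive roughly 1000 ms/MB * 100 MB = 100,000 ms, i.e. about 100 seconds after its last strong reference is cleared.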





Conclusion
Garbage collection can become a bottleneck in different applications depending on the requirements of the applications. By understanding the requirements of the application and the garbage collection options, it is possible to minimize the impact of garbage collection.

Other Documentation
8.1 Example of Output
The GC output examples document contains examples for different types of garbage collector behavior. The examples show the diagnostic output from the garbage collector and explain how to recognize various problems. Examples from different collectors are included.

8.2 Frequently Asked Questions
A FAQ is included that contains answers to specific questions. The level of detail in the FAQ is generally greater than in this tuning document.

As used on the web site, the terms "Java Virtual Machine" and "JVM" mean a virtual machine for the Java platform.





Copyright © 2003 Sun Microsystems, Inc. All Rights Reserved.













OP | Posted on 2007-2-26 14:22:06 | Show all posts
http://wiki.jboss.org/wiki/Wiki.jsp?page=JBossASTuningSliming
==================================================
Tuning and Slimming JBossAS
based on JBoss 3.2.6


REVISED FOR 4.0.4+ JBoss4Slimming


Preface

This advice covers how to tune and/or slim JBossAS. The two concepts are orthogonal in most cases: while reducing idle service threads through slimming won't have a large impact on performance, using less memory and fewer resources may allow you to tune other performance aspects, and of course slimming does reduce startup time. Furthermore, as a general security principle, remove services you don't use. We will treat the two categories, slimming and tuning, separately. We start from the default configuration and trim from there (clustering will be the topic of a later wiki page ;-) ). This advice does not cover areas of tuning that crosscut developer and administrative roles (application tuning such as cache sizes); it is primarily advice for administrative tuning.

A note for those concerned that this advice will produce a technically non-J2EE-compliant instance of JBoss: it will (though 3.2.6 is not compliant anyhow), since removing key J2EE services would cause JBoss to fail the TCK. Most performance tuning/administrative tasks done in real-world installations technically fall into this category.

Assume that you have copied the server/default directory and its subdirectories to server/slim.


Tuning


Java Virtual Machine


Tune VM Garbage Collection or Tune JDK 5 Garbage Collection for your machine and memory size
Use JRockit on x86 hardware
Use a 64 bit machine and 64 bit VM so that you can use large heap sizes, larger than 2-4GB typically. 64 bit support is available on all recent SPARC/Solaris boxes running Solaris 9 or later, Itanium with JDK 1.4, or JDK 5 on Linux x64.
DO NOT USE -d64 (64-bit) unless you actually need MORE than the maximum 32-bit heap space (roughly 2-4 GB of heap). 64-bit addressing requires MORE memory to do the same amount of work and provides no advantage for applications that do not need that much memory.
Avoid extra-large heaps, but also avoid extra-small heaps. (We cannot tell you what qualifies because it depends on what you're doing.) Heap size affects generational garbage collection and the total time to scan the heap. It is difficult to tune a small heap effectively (even if your app only uses 200MB, if you're using parallel garbage collection + CMS you're going to need well above 512MB), while oversized heaps spend needless time scanning memory for garbage collection.
Avoid the Sun 1.4 VM. JDK 5 is VASTLY better, especially in the area of garbage collection, and JRockit is great on x86.
Use the -server option, and shrink the thread stack with -XX:ThreadStackSize=128k (Solaris) or -Xss128k (every other platform). On Solaris -Xss128k does nothing (you can only set LARGER thread stack sizes that way). A smaller stack lets you create more threads in less memory, but it might result in blown stacks with extremely recursive code; even so, a 128k stack is nothing to shake a stick at. (See the example command line after this list.)
You really need to understand generational garbage collectors to tune this properly, and you really have to do load testing (OpenSTA, JMeter, etc.) to know for sure.
You really should use a multi-processor machine with more than 2 processors and use the various parallel and concurrent garbage collection options (we cover this in Advanced JBoss Training, hint hint) for maximum performance and high garbage collector throughput. However, you really need to understand how garbage collection works to tune this well. JDK 5 is mostly self-tuning.
JDK 1.4's default NewSize is not a good guess. Bad rule of thumb: < 20% of the heap is a reasonable NewSize. Above 20% is dangerous due to a nasty JDK bug which can cause the VM to run back-to-back full garbage collections and never resume the application or free up enough memory. JDK 5 does not seem to exhibit this bug and seems to pick saner defaults.
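Putting several of the points above together, here is a minimal sketch of a JDK 5 start command for a server-class box (the heap size is illustrative rather than a recommendation, com.example.Main is a placeholder main class, and on Solaris you would substitute the -XX:ThreadStackSize form mentioned above; load test your own values):

java -server -Xms1024m -Xmx1024m -Xss128k -XX:+UseConcMarkSweepGC com.example.Main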


JBoss/Java on Linux

If you are running JBoss AS on a Linux server, you should read this article, written by Andrew Oliver (a consultant with JBoss, a division of Red Hat), on how to tune JBoss/Java on a Linux server.


Tomcat


Edit your server/slim/jbossweb-tomcat50.sar/server.xml
Check the XML document for connectors you are using. For example, the HTTP connector:



<Connector port="8080" address="${jboss.bind.address}"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true"/>


You should have enough threads (maxThreads) to handle (rule of thumb) 25% more than your maximum expected load (concurrent hits coming in at once)
Set minSpareThreads to just a little more than your normal load
Set maxSpareThreads to just a little more than your peak load
minSpareThreads means "on start up, always keep at least this many threads waiting idle"
maxSpareThreads means "if we ever go above minSpareThreads, then keep no more than maxSpareThreads waiting idle"


Remove any unnecessary valves and logging. If you're not using JBoss's security, remove the security valve (see below).
Precompile JSPs. (The built-in compiler is fairly fast, so this may not be worthwhile for small sites.)
Turn off "development" mode in your server/slim/jbossweb-tomcat50.sar/conf/web.xml


RMI for Remote Invocations

By default, JBoss creates a new thread for every RMI request that comes in. This is not generally efficient on a large system. It can also be dangerous to allow unrestrained connection creation during performance or traffic spikes, or when runaway clients keep opening connections. To remedy this, consider switching to the pooled invoker.


Edit server/slim/conf/standardjboss.xml
Change all of the proxy bindings to the pooled invoker by changing every XML fragment reading:



<invoker-mbean>jboss:service=invoker,type=jrmp</invoker-mbean>

to



<invoker-mbean>jboss:service=invoker,type=pooled</invoker-mbean>

JBoss also has a mostly undocumented PooledInvokerHA you may try.


Log4j

Logging has a profound effect on performance. Changing the logging level to TRACE can bring JBossAS to a crawl; changing it to ERROR (or WARN) can speed things up dramatically.


By default, JBoss logs both to the console and server.log and by default it uses level "INFO".
Consider not logging to System.out (you may still want to redirect it to catch JVM errors)
Consider changing the log level to ERROR. Remember that JBoss watches its log4j config file for changes and you can always change configuration at runtime.
Add a category filter for your Java class hierarchy.

To turn off console logging:


Edit server/slim/conf/log4j.xml
Change the following XML fragment:



<root>
  <appender-ref ref="CONSOLE"/>
  <appender-ref ref="FILE"/>
</root>

make it read



<root>
  <appender-ref ref="FILE"/>
</root>


You can then remove this fragment:



<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
  <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
  <param name="Target" value="System.out"/>
  <param name="Threshold" value="INFO"/>
  <layout class="org.apache.log4j.PatternLayout">
    <!-- The default pattern: Date Priority [Category] Message\n -->
    <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n"/>
  </layout>
</appender>

To change the log level:


Edit server/slim/conf/log4j.xml
Remove/comment these XML fragments:



<category name="org.apache">
  <priority value="INFO"/>
</category>

<!-- Limit org.jgroups category to INFO -->
<category name="org.jgroups">
  <priority value="INFO"/>
</category>


Change the root category by changing this XML fragment:



<root>
  <appender-ref ref="CONSOLE"/> <!-- you may have removed this earlier -->
  <appender-ref ref="FILE"/>
</root>

to look like this



<root>
  <priority value="ERROR" />
  <appender-ref ref="CONSOLE"/> <!-- you may have removed this earlier -->
  <appender-ref ref="FILE"/>
</root>

And finally, probably the most important thing in log4j: make sure you limit the logging level on your own class hierarchy. This assumes that you are using log4j as it was intended and not writing everything to System.out. Doing so will significantly reduce the overhead of log4j and allow you to fully enjoy the benefits of guards like if (log.isDebugEnabled()).... If you don't do this, then all the logging in your code will get formatted and passed to the appender, and only the threshold on the appender will weed out the log messages; this can generate a significant amount of garbage. Assuming your Java package starts with "a.b", add something like this to log4j.xml:



<!-- Limit a.b category to INFO -->
<category name="a.b">
  <priority value="INFO"/>
</category>

This can be added in the same area where you find the category filters for org.apache and org.jboss (see above).



Deployment Scanner


The deployment scanner scanning every 5 seconds eats up cycles, especially on systems with a slow filesystem (*cough* NTFS *cough*).
See the slimming section below for how to raise the scan period so that scanning happens less frequently, or to turn it off entirely


Stateless Session Beans


EJB 1.x-2.x stateless session beans operate with an ill-advised pooling model (required by the specification). If you find that you need more than the default (10) instances, consider setting the minimum pool size:

Edit server/slim/conf/standardjboss.xml and scroll down to:


<container-configuration>
<container-name>Standard Stateless SessionBean</container-name>
<call-logging>false</call-logging>
<invoker-proxy-binding-name>stateless-rmi-invoker</invoker-proxy-binding-name>
<container-interceptors>

and find:



<container-pool-conf>
<MaximumSize>100</MaximumSize>
</container-pool-conf>
</container-configuration>

change it to read:



<container-pool-conf>
<MinimumSize>100</MinimumSize>
<MaximumSize>100</MaximumSize>
<strictMaximumSize/>
<strictTimeout>30000</strictTimeout>
</container-pool-conf>
</container-configuration>

For the most part a server environment doesn't want these pools growing and shrinking (because that causes memory fragmentation, and that's worse than latent heap usage). From a performance standpoint, the number should be big enough to serve all your requests with no blocking.


CMP tuning


Read this: http://www.artima.com/forums/flat.jsp?forum=141&thread=24532
and this: http://www.onjava.com/pub/a/onjava/2003/05/28/jboss_optimization.html
now ditch CMP and use JBossHibernate instead


Connection Pools


Don't use XA versions unless you really know you need them. XA connections do not have good performance.
Use database-specific "ping" support, where available, for "check-connection", or use database-specific driver fail-over support rather than checking connections at all. (Remember that not all tuning options may be feasible in your environment; we're talking about the optimal case.)



Slimming


When not using the mail-service (J2EE standard JavaMail client)


remove server/slim/deploy/mail-service.xml
remove server/slim/lib/mail* (mail-plugin.jar, mail.jar - JavaMail stuff)
remove server/slim/lib/activation.jar (Java Activation Framework is used by JavaMail)


When not using the cache invalidation service (used for CMP Option A beans with Cache Invalidation usually in a clustered configuration)


remove server/slim/deploy/cache-invalidation-service.xml


When not using the J2EE client deployer service (this is a not very useful J2EE spec required service for the EAR application-client.xml descriptor)


remove server/slim/deploy/client-deployer-service.xml


When not using the integrated HAR deployer and Hibernate session management services


remove server/slim/deploy/hibernate-deployer-service.xml (HAR support)
remove server/slim/lib/jboss-hibernate.jar (HAR support)
remove server/slim/lib/hibernate2.jar (Hibernate itself)
remove server/slim/lib/cglib-full-2.0.1.jar (used by Hibernate to create proxies of POJOs)
remove server/slim/lib/odmg-3.0.jar (some goofy object-relational mapping thing used by hibernate from some goofy committee http://www.service-architecture.com/database/articles/odmg_3_0.html)


When not using Hypersonic (which you should not in production)

Note that JBossMQ as deployed in the default configuration uses DefaultDS with mappings for Hypersonic. See JBoss MQ Persistence Wiki pages for more information on configuring other alternatives.


remove server/slim/deploy/hsqldb-ds.xml
remove server/slim/lib/hsqldb-plugin.jar
remove server/slim/lib/hsqldb.jar


When not using JBossMQ (our JMS server)


remove the entire server/slim/deploy/jms directory
remove server/slim/lib/jbossmq.jar


When not using the HTTPInvoker (which lets you tunnel RMI over HTTP)


remove the entire server/slim/deploy/http-invoker.sar directory


When not using XA datasources (Distributed and/or recoverable transactions)


remove server/slim/deploy/jboss-xa-jdbc.rar


If you do not need the JMX-Console then remove it


remove server/slim/deploy/jmx-console.war
Otherwise, secure it


If you do not need to make JMX calls over RMI (warning: shutdown.sh DOES do this)


remove server/slim/deploy/jmx-invoker-adaptor-server.sar
remove server/slim/deploy/jmx-adaptor-plugin.jar
or you may want to just secure the JMX invoker-adaptor instead


If you do not need the web-console


remove server/slim/deploy/management/web-console.war


If you do not need JSR-77 extensions for JMX


remove server/slim/deploy/management/console-mgr.sar


If you need neither the web-console nor the JSR-77 extensions


remove server/slim/deploy/management directory entirely


If you are not using console/email monitor alerts


remove server/slim/deploy/monitoring-service.xml
remove server/slim/lib/jboss-monitoring.jar


If you are not using rich property editors (JMX) or loading properties into system properties via the Properties Service


remove server/slim/deploy/properties-service.xml
remove server/slim/lib/properties-plugin.jar


The scheduler-service.xml is only an example -- unless you have put your own schedules in it, you can remove it


remove server/slim/deploy/scheduler-service.xml


If you are not using the JBoss Scheduler Manager (allows you to schedule invocations against MBeans)


remove server/slim/deploy/schedule-manager-service.xml
remove server/slim/lib/scheduler-plugin* (scheduler-plugin.jar, scheduler-plugin-example.jar)


If you do not need vendor-specific SQL exception handling (just leave it, really)


remove server/slim/deploy/sqlexception-service.xml


If you are using neither client-side transaction management nor cached connections (where instead of pooling we cache connections such as in the case of JAAS->DB User -- using this means you are a bad person and need to be smacked)


remove server/slim/deploy/user-service.xml


If you do not use JBoss UUID key generation (often used with CMP primary keys, but we have database specific support as well)


remove server/slim/deploy/uuid-key-generator.sar
remove server/slim/lib/autonumber-plugin.jar


user-service.xml is an example -- unless you put something in it (your own mbeans) you can always remove it.


remove server/slim/deploy/user-service.xml


If your users directly connect to Tomcat via HTTP and do not pass through Apache/mod_jk:


open server/slim/deploy/jbossweb-tomcat50.sar/server.xml in the vi editor
remove/comment the following XML fragment:



<!-- A AJP 1.3 Connector on port 8009 -->
<Connector port="8009" address="${jboss.bind.address}"
enableLookups="false" redirectPort="8443" debug="0"
protocol="AJP/1.3"/>


If your users do not directly connect to Tomcat via HTTP and always pass through Apache/mod_jk


open server/slim/deploy/jbossweb-tomcat50.sar/server.xml in the vi editor
remove/comment the following XML fragment:



<!-- A HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" address="${jboss.bind.address}"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true"/>


If you do not need to be able to deploy EAR files


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment the following XML fragments from under the <mbean code="org.jboss.management.j2ee.LocalJBossServerDomain"> MBean:


<attribute name="EARDeployer">jboss.j2ee:service=EARDeployer</attribute>

and



<!-- EAR deployer, remove if you are not using Web layers -->
<mbean code="org.jboss.deployment.EARDeployer" name="jboss.j2ee:service=EARDeployer">
</mbean>


If you do not need to be able to deploy JMS Queues


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment the following XML fragment from under the <mbean code="org.jboss.management.j2ee.LocalJBossServerDomain"> MBean:


<attribute name="JMSService">jboss.mq:service=DestinationManager</attribute>



If you do not need to use CORBA/IIOP


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment the following XML fragment from under the <mbean code="org.jboss.management.j2ee.LocalJBossServerDomain"> MBean:


<attribute name="RMI_IIOPService">jboss:service=CorbaORB</attribute>


If you removed the user-transaction-service.xml


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment the following XML fragment from under the <mbean code="org.jboss.management.j2ee.LocalJBossServerDomain"> MBean:


<attribute name="UserTransactionService">jboss:service=ClientUserTransaction</attribute>


If you do not need JSR-77 support (tried to make JBoss, Weblogic and Websphere support some basic similar JMX monitoring) you can remove/comment the entire fragment from server/slim/conf/jboss-service.xml:



<!-- ==================================================================== -->
<!-- JSR-77 Single JBoss Server Management Domain -->
<!-- ==================================================================== -->
<mbean code="org.jboss.management.j2ee.LocalJBossServerDomain"
name="jboss.management.local:j2eeType=J2EEDomain,name=Manager">
<attribute name="MainDeployer">jboss.system:service=MainDeployer</attribute>
<attribute name="SARDeployer">jboss.system:service=ServiceDeployer</attribute>
<!-- <attribute name="EARDeployer">jboss.j2ee:service=EARDeployer</attribute>-->
<attribute name="EJBDeployer">jboss.ejb:service=EJBDeployer</attribute>
<attribute name="RARDeployer">jboss.jca:service=RARDeployer</attribute>
<attribute name="CMDeployer">jboss.jca:service=ConnectionFactoryDeployer</attribute>
<attribute name="WARDeployer">jboss.web:service=WebServer</attribute>
<attribute name="MailService">jboss:service=Mail</attribute>
<!-- <attribute name="JMSService">jboss.mq:service=DestinationManager</attribute>-->
<attribute name="JNDIService">jboss:service=Naming</attribute>
<attribute name="JTAService">jboss:service=TransactionManager</attribute>
<!-- <attribute name="UserTransactionService">jboss:service=ClientUserTransaction</attribute>
<attribute name="RMI_IIOPService">jboss:service=CorbaORB</attribute>-->
</mbean>


If you do not need client-side transaction management (remember that using this means you're a bad person)


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment the following XML fragments



<!--
| UserTransaction support.
-->
<mbean code="org.jboss.tm.usertx.server.ClientUserTransactionService"
name="jboss:service=ClientUserTransaction"
xmbean-dd="resource:xmdesc/ClientUserTransaction-xmbean.xml">
<depends>
<mbean code="org.jboss.invocation.jrmp.server.JRMPProxyFactory"
name="jboss:service=proxyFactory,target=ClientUserTransactionFactory">
<attribute name="InvokerName">jboss:service=invoker,type=jrmp</attribute>
<attribute name="TargetName">jboss:service=ClientUserTransaction</attribute>
<attribute name="JndiName">UserTransactionSessionFactory</attribute>
<attribute name="ExportedInterface">
  org.jboss.tm.usertx.interfaces.UserTransactionSessionFactory
</attribute>
<attribute name="ClientInterceptors">
<interceptors>
<interceptor>org.jboss.proxy.ClientMethodInterceptor</interceptor>
<interceptor>org.jboss.invocation.InvokerInterceptor</interceptor>
</interceptors>
</attribute>
<depends>jboss:service=invoker,type=jrmp</depends>
</mbean>
</depends>
<depends optional-attribute-name="TxProxyName">
<mbean code="org.jboss.invocation.jrmp.server.JRMPProxyFactory"
name="jboss:service=proxyFactory,target=ClientUserTransaction">
<attribute name="InvokerName">jboss:service=invoker,type=jrmp</attribute>
<attribute name="TargetName">jboss:service=ClientUserTransaction</attribute>
<attribute name="JndiName"></attribute>
<attribute name="ExportedInterface">
  org.jboss.tm.usertx.interfaces.UserTransactionSession
</attribute>
<attribute name="ClientInterceptors">
<interceptors>
<interceptor>org.jboss.proxy.ClientMethodInterceptor</interceptor>
<interceptor>org.jboss.invocation.InvokerInterceptor</interceptor>
</interceptors>
</attribute>
<depends>jboss:service=invoker,type=jrmp</depends>
</mbean>
</depends>

</mbean>
you can now remove server/slim/conf/xmdesc/ClientUserTransaction-xmbean.xml since it is no longer referenced


If you do not need persistent MBean attributes (no JBoss MBeans use this by default...yet)


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment this XML fragment



<!-- ==================================================================== -->
<!-- XMBean Persistence -->
<!-- ==================================================================== -->
<mbean code="org.jboss.system.pm.AttributePersistenceService"
name="jboss:service=AttributePersistenceService"
xmbean-dd="resource:xmdesc/AttributePersistenceService-xmbean.xml">
<attribute name="AttributePersistenceManagerClass">
    org.jboss.system.pm.XMLAttributePersistenceManager
</attribute>
<attribute name="AttributePersistenceManagerConfig">
<data-directory>data/xmbean-attrs</data-directory>
</attribute>
<attribute name="ApmDestroyOnServiceStop">false</attribute>
<attribute name="VersionTag"></attribute>
</mbean>
you can also remove server/slim/conf/xmdesc/AttributePersistenceService-xmbean.xml since it is no longer referenced


If you do not use RMI Classloading (for loading codebases from the client using the classes on the server)


open server/slim/conf/jboss-service.xml in the vi editor
remove/comment this XML fragment



<!-- ==================================================================== -->
<!-- JBoss RMI Classloader - only install when available -->
<!-- ==================================================================== -->
<mbean code="org.jboss.util.property.jmx.SystemPropertyClassValue"
name="jboss.rmi:type=RMIClassLoader">
<attribute name="roperty">java.rmi.server.RMIClassLoaderSpi</attribute>
<attribute name="ClassName">org.jboss.system.JBossRMIClassLoader</attribute>
</mbean>

and



<!-- ==================================================================== -->
<!-- Class Loading -->
<!-- ==================================================================== -->

<mbean code="org.jboss.web.WebService"
name="jboss:service=WebService">
<attribute name="ort">8083</attribute>
<!-- Should resources and non-EJB classes be downloadable -->
<attribute name="DownloadServerClasses">true</attribute>
<attribute name="Host">${jboss.bind.address}</attribute>
<attribute name="BindAddress">${jboss.bind.address}</attribute>
</mbean>


and change this XML fragment (NOTE: In JBoss 4.0, this is located in the file server/slim/deploy/ejb-deployer.xml):



<!-- EJB deployer, remove to disable EJB behavior-->
<mbean code="org.jboss.ejb.EJBDeployer" name="jboss.ejb:service=EJBDeployer">
<attribute name="VerifyDeployments">true</attribute>
...
<depends optional-attribute-name="WebServiceName">jboss:service=WebService</depends>
</mbean>

to read like this:



<!-- EJB deployer, remove to disable EJB behavior-->
<mbean code="org.jboss.ejb.EJBDeployer" name="jboss.ejb:service=EJBDeployer">
<attribute name="VerifyDeployments">true</attribute>
...
<!-- <depends optional-attribute-name="WebServiceName">jboss:service=WebService</depends> -->
</mbean>

or alternatively remove the WebServiceName depends/attribute.



If you only want to use JBoss Naming locally (no RMI clients)


open server/slim/conf/jboss-service.xml in vi
change the following XML fragment



<!-- ==================================================================== -->
<!-- JNDI -->
<!-- ==================================================================== -->

<mbean code="org.jboss.naming.NamingService"
name="jboss:service=Naming"
xmbean-dd="resource:xmdesc/NamingService-xmbean.xml">
...
<!-- The listening port for the bootstrap JNP service. Set this to -1
to run the NamingService without the JNP invoker listening port.
-->
<attribute name="ort">1099</attribute>
...
<!-- The port of the RMI naming service, 0 == anonymous -->
<attribute name="RmiPort">1098</attribute>
...
</mbean>

To read



<!-- ==================================================================== -->
<!-- JNDI -->
<!-- ==================================================================== -->

<mbean code="org.jboss.naming.NamingService"
name="jboss:service=Naming"
xmbean-dd="resource:xmdesc/NamingService-xmbean.xml">
...
<!-- The listening port for the bootstrap JNP service. Set this to -1
to run the NamingService without the JNP invoker listening port.
-->
<attribute name="ort">-1</attribute>
...
<!-- The port of the RMI naming service, 0 == anonymous -->
<attribute name="RmiPort">0</attribute>
...
</mbean>

Changing RmiPort is mostly optional, but it means we won't bind to port 1098, which can be helpful.

You may also want to remove the associated thread pool by removing this line from the same XML block:



<depends optional-attribute-name="LookupPool"
proxy-type="attribute">jboss.system:service=ThreadPool</depends>

and the thread pool block itself:



<!-- A Thread pool service -->
<mbean code="org.jboss.util.threadpool.BasicThreadPool"
name="jboss.system:service=ThreadPool">
<attribute name="Name">JBoss System Threads</attribute>
<attribute name="ThreadGroupName">System Threads</attribute>
<attribute name="KeepAliveTime">60000</attribute>
<attribute name="MinimumPoolSize">1</attribute>
<attribute name="MaximumPoolSize">10</attribute>
<attribute name="MaximumQueueSize">1000</attribute>
<attribute name="BlockingMode">run</attribute>
</mbean>


The JNDIView MBean (which shows the JNDI naming tree) is incredibly useful from the JMX Console if you want to use it, but if you don't:


open server/slim/conf/jboss-service.xml in vi
remove



<mbean code="org.jboss.naming.JNDIView"
name="jboss:service=JNDIView"
xmbean-dd="resource:xmdesc/JNDIView-xmbean.xml">
</mbean>
you can also remove server/slim/conf/xmdesc/JNDIView-xmbean.xml



If you do not use JBossSX, our integrated JAAS-based security for EJBs or Web-tier components (then you deserve to be flogged and I hope you get hacked, but that's another story):


open server/slim/conf/jboss-service.xml in vi
remove



<!-- ==================================================================== -->
<!-- Security -->
<!-- ==================================================================== -->
<!--
<mbean code="org.jboss.security.plugins.SecurityConfig"
name="jboss.security:service=SecurityConfig">
<attribute name="LoginConfig">jboss.security:service=XMLLoginConfig</attribute>
</mbean>
<mbean code="org.jboss.security.auth.login.XMLLoginConfig"
name="jboss.security:service=XMLLoginConfig">
<attribute name="ConfigResource">login-config.xml</attribute>
</mbean>
-->


edit server/slim/deploy/jbossweb-tomcatxx.sar/META-INF/jboss-service.xml and comment out these fragments:



<!-- The JAAS security domain to use in the absense of an explicit
security-domain specification in the war WEB-INF/jboss-web.xml
-->
<!-- <attribute name="DefaultSecurityDomain">java:/jaas/other</attribute>-->

and



<!-- A mapping to the server security manager service which must be
operation compatible with type
org.jboss.security.plugins.JaasSecurityManagerServiceMBean. This is only
needed if web applications are allowed to flush the security manager
authentication cache when the web sessions invalidate.
-->
<!-- <depends optional-attribute-name="SecurityManagerService"
proxy-type="attribute">jboss.security:service=JaasSecurityManager
</depends>-->


also remove/comment:




<!-- JAAS security manager and realm mapping -->
<mbean code="org.jboss.security.plugins.JaasSecurityManagerService"
name="jboss.security:service=JaasSecurityManager">
<attribute name="SecurityManagerClassName">
org.jboss.security.plugins.JaasSecurityManager
</attribute>
<attribute name="DefaultCacheTimeout">1800</attribute>
<attribute name="DefaultCacheResolution">60</attribute>
</mbean>


If you're using JBossMQ you'll need to either remove (preferred) all test queues/topics from server/slim/deploy/jms/jbossmq-destinations-service.xml or comment out their security information. Add comments like the following if you choose to keep the example topics/queues:



<mbean code="org.jboss.mq.server.jmx.Topic"
name="jboss.mq.destination:service=Topic,name=testTopic">
<depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
<!-- <depends optional-attribute-name="SecurityManager">jboss.mq:service=SecurityManager</depends>
<attribute name="SecurityConf">
<security>
<role name="guest" read="true" write="true"/>
<role name="publisher" read="true" write="true" create="false"/>
<role name="durpublisher" read="true" write="true" create="true"/>
</security>
</attribute>-->
</mbean>

<mbean code="org.jboss.mq.server.jmx.Topic"
name="jboss.mq.destination:service=Topic,name=testTopic">
<depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
<!-- <depends optional-attribute-name="SecurityManager">jboss.mq:service=SecurityManager</depends>
<attribute name="SecurityConf">
<security>
<role name="guest" read="true" write="true"/>
<role name="publisher" read="true" write="true" create="false"/>
<role name="durpublisher" read="true" write="true" create="true"/>
</security>
</attribute>-->
</mbean>

<mbean code="org.jboss.mq.server.jmx.Topic"
name="jboss.mq.destination:service=Topic,name=testDurableTopic">
<depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
<!--
<depends optional-attribute-name="SecurityManager">jboss.mq:service=SecurityManager</depends>
<attribute name="SecurityConf">
<security>
<role name="guest" read="true" write="true"/>
<role name="publisher" read="true" write="true" create="false"/>
<role name="durpublisher" read="true" write="true" create="true"/>
</security>
</attribute>-->
</mbean>

<mbean code="org.jboss.mq.server.jmx.Queue"
name="jboss.mq.destination:service=Queue,name=testQueue">
<depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
<!--
<depends optional-attribute-name="SecurityManager">jboss.mq:service=SecurityManager</depends>
<attribute name="SecurityConf">
<security>
<role name="guest" read="true" write="true"/>
<role name="publisher" read="true" write="true" create="false"/>
<role name="noacc" read="false" write="false" create="false"/>
</security>
</attribute>-->
</mbean>


If using JBossMQ you'll also need to edit server/slim/deploy/jms/jbossmq-service.xml and change the InterceptorLoader XML fragment to read like this:



<mbean code="org.jboss.mq.server.jmx.InterceptorLoader" name="jboss.mq:service=TracingInterceptor">
<attribute name="InterceptorClass">org.jboss.mq.server.TracingInterceptor</attribute>
<depends optional-attribute-name="NextInterceptor">jboss.mq:service=DestinationManager</depends>
<!--
<depends optional-attribute-name="NextInterceptor">jboss.mq:service=SecurityManager</depends>
-->
</mbean>


you'll also need to comment out or remove (from server/slim/deploy/jms/jbossmq-service.xml):



<!-- <mbean code="org.jboss.mq.security.SecurityManager" name="jboss.mq:service=SecurityManager">
<attribute name="DefaultSecurityConfig">
<security>
<role name="guest" read="true" write="true" create="true"/>
</security>
</attribute>
<attribute name="SecurityDomain">java:/jaas/jbossmq</attribute>
<depends optional-attribute-name="NextInterceptor">jboss.mq:service=DestinationManager</depends>
</mbean>
-->


alter the dead letter queue entry (server/slim/deploy/jms/jbossmq-service.xml) by commenting out the security stuff:



<!-- Dead Letter Queue -->
<mbean code="org.jboss.mq.server.jmx.Queue"
name="jboss.mq.destination:service=Queue,name=DLQ">
<depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
<!--
<depends optional-attribute-name="SecurityManager">jboss.mq:service=SecurityManager</depends>-->
</mbean>


in server/slim/deploy/jms/jms-ds.xml alter the JmsXA entry to read as follows:



<!-- JMS XA Resource adapter, use this to get transacted JMS in beans -->
<tx-connection-factory>
<jndi-name>JmsXA</jndi-name>
<xa-transaction/>
<adapter-display-name>JMS Adapter</adapter-display-name>
<config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
<config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
<max-pool-size>20</max-pool-size>
<!--
<security-domain-and-application>JmsXARealm</security-domain-and-application>-->
</tx-connection-factory>


If using JBoss 4, also do these two things:
in conf/login-config.xml, comment out the following:


<!-- Security domains for testing new jca framework
    <application-policy name = "HsqlDbRealm">
       <authentication>
          <login-module code = "org.jboss.resource.security.ConfiguredIdentityLoginModule"
             flag = "required">
             <module-option name = "principal">sa</module-option>
             <module-option name = "userName">sa</module-option>
             <module-option name = "password"></module-option>
             <module-option name = "managedConnectionFactoryName">
                 jboss.jca:service=LocalTxCM,name=DefaultDS
             </module-option>
          </login-module>
       </authentication>
    </application-policy>

    <application-policy name = "JmsXARealm">
       <authentication>
          <login-module code = "org.jboss.resource.security.ConfiguredIdentityLoginModule"
             flag = "required">
             <module-option name = "principal">guest</module-option>
             <module-option name = "userName">guest</module-option>
             <module-option name = "password">guest</module-option>
             <module-option name = "managedConnectionFactoryName">
                jboss.jca:service=TxCM,name=JmsXA
             </module-option>
          </login-module>
       </authentication>
    </application-policy> -->


and in deploy/hsqldb-ds.xml comment out:


<!-- Use the security domain defined in conf/login-config.xml
<security-domain>HsqlDbRealm</security-domain> -->



If you are not using the Pooled Invoker (see the tuning section above; you may want to use the pooled invoker), then:



open server/slim/conf/jboss-service.xml in vi
remove:



<!--
<mbean code="org.jboss.invocation.pooled.server.PooledInvoker"
name="jboss:service=invoker,type=pooled">
<attribute name="NumAcceptThreads">1</attribute>
<attribute name="MaxPoolSize">300</attribute>
<attribute name="ClientMaxPoolSize">300</attribute>
<attribute name="SocketTimeout">60000</attribute>
<attribute name="ServerBindAddress">${jboss.bind.address}</attribute>
<attribute name="ServerBindPort">4445</attribute>
<attribute name="ClientConnectAddress">${jboss.bind.address}</attribute>
<attribute name="ClientConnectPort">0</attribute>
<attribute name="EnableTcpNoDelay">false</attribute>
<depends optional-attribute-name="TransactionManagerService">
  jboss:service=TransactionManager</depends>
</mbean>
-->



If you do not wish to use the BeanShell deployer


open server/slim/conf/jboss-service.xml in vi
remove or comment



<mbean code="org.jboss.varia.deployment.BeanShellSubDeployer"
name="jboss.scripts:service=BSHDeployer">
</mbean>



remove server/slim/bsh* (bsh-deployer.jar, bsh-1.3.0.jar)


If you do not hot deploy files into the server/slim/deploy directory (that is, you restart JBoss when you deploy):


open server/slim/conf/jboss-service.xml in vi
change this XML fragment:



<!-- An mbean for hot deployment/undeployment of archives.
-->
<mbean code="org.jboss.deployment.scanner.URLDeploymentScanner"
name="jboss.deployment:type=DeploymentScanner,flavor=URL">
...

<attribute name="ScanPeriod">5000</attribute>
...
</mbean>

to read (by adding):




<!-- An mbean for hot deployment/undeployment of archives.
-->
<mbean code="org.jboss.deployment.scanner.URLDeploymentScanner"
name="jboss.deployment:type=DeploymentScanner,flavor=URL">
...

<attribute name="ScanPeriod">5000</attribute>
<attribute name="ScanEnabled">False</attribute>
...
</mbean>


see the tuning section for other advice regarding this from a performance perspective


If you do not use clustering


The best way to do this is to start from the "default" config rather than the "all" config. Then bring over from the "all" config any miscellaneous services that you are using that aren't in "default".
If you must start from the "all" config:
Remove server/slim/farm
Remove server/slim/deploy-hasingleton
Remove server/slim/deploy/cluster-service.xml
Remove server/slim/deploy/tc5-cluster-service.xml (OR server/slim/deploy/tc5-cluster.sar on 4.0.4 or later)
Remove server/slim/deploy/deploy.last/farm-service.xml
Remove server/slim/deploy/deploy-hasingleton-service.xml
Go into the server/slim/deploy/jms folder, remove its contents, and replace them with the contents of the server/default/deploy/jms folder.
Edit server/slim/deploy/jbossweb-tomcat55.sar/META-INF/jboss-service.xml to remove this fragment:


<!--
   Needed if using HTTP Session Clustering or if the
   ClusteredSingleSignOn valve is enabled in the tomcat server.xml file
-->
<depends>jboss.cache:service=TomcatClusteringCache</depends>


If you do not use distributed (clustered) web sessions


Remove server/slim/deploy/tc5-cluster-service.xml (OR server/slim/deploy/tc5-cluster.sar on 4.0.4 or later)
Edit server/slim/deploy/jbossweb-tomcat55.sar/META-INF/jboss-service.xml to remove this fragment:


<!--
   Needed if using HTTP Session Clustering or if the
   ClusteredSingleSignOn valve is enabled in the tomcat server.xml file
-->
<depends>jboss.cache:service=TomcatClusteringCache</depends>


If you do not use the Farm service (replicated deployments)


Remove server/slim/farm
Remove server/slim/deploy/deploy.last/farm-service.xml


See the JBoss wiki for a complete list of service dependencies.





--------------------------------------------------------------------------------

This page last changed on 11-Feb-2007 12:44:03 EST by acoliver@jboss.org.  







© 2006 Red Hat Middleware, LLC. All rights reserved.
OP | Posted on 2007-2-26 14:25:20 | Show all posts
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
====================================================
Java HotSpot VM Options

This document provides information on typical command-line options and environment variables that can affect the performance characteristics of the Java HotSpot Virtual Machine. Unless otherwise noted, all information in this document pertains to both the Java HotSpot Client VM and the Java HotSpot Server VM.

Users of JDKs older than 1.3.0 who wish to port to a Java HotSpot VM should see Java HotSpot Equivalents of Exact VM flags.



Categories of Java HotSpot VM Options


Standard options recognized by the Java HotSpot VM are described on the Java Application Launcher reference pages for Windows, Solaris and Linux. This document deals exclusively with non-standard options recognized by the Java HotSpot VM:

Options that begin with -X are non-standard (not guaranteed to be supported on all VM implementations), and are subject to change without notice in subsequent releases of the JDK.
Options that are specified with -XX are not stable and are not recommended for casual use. These options are subject to change without notice.


Some Useful -XX Options


Default values are listed for Java SE 6 for Solaris Sparc with -server. Some options may vary per architecture/OS/JVM version. Platforms with a differing default value are listed in the description.

Boolean options are turned on with -XX:+<option> and turned off with -XX:-<option>.
Numeric options are set with -XX:<option>=<number>. Numbers can include 'm' or 'M' for megabytes, 'k' or 'K' for kilobytes, and 'g' or 'G' for gigabytes (for example, 32k is the same as 32768).
String options are set with -XX:<option>=<string>; they are usually used to specify a file, a path, or a list of commands.
Flags marked as manageable are dynamically writeable through the JDK management interface (com.sun.management.HotSpotDiagnosticMXBean API) and also through JConsole. In Monitoring and Managing Java SE 6 Platform Applications, Figure 3 shows an example. The manageable flags can also be set through jinfo -flag.
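As a quick illustration of these conventions, here is a sketch that combines a boolean, a numeric, and a string option, plus runtime changes to manageable flags (MyApp and <pid> are placeholders; the values are illustrative):

java -XX:+PrintGCDetails -XX:MaxPermSize=128m -XX:HeapDumpPath=/tmp MyApp

jinfo -flag +PrintGCDetails <pid>
jinfo -flag HeapDumpPath=/var/dumps <pid>

The first jinfo command turns a manageable boolean flag on in a running VM; the second sets a manageable string flag.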

The options below are loosely grouped into three categories.

Behavioral options change the basic behavior of the VM.
Performance tuning options are knobs which can be used to tune VM performance.
Debugging options generally enable tracing, printing, or output of VM information.


--------------------------------------------------------------------------------

Behavioral Options

Option and Default Value / Description
-XX:-AllowUserSignalHandlers Do not complain if the application installs signal handlers. (Relevant to Solaris and Linux only.)


-XX:AltStackSize=16384 Alternate signal stack size (in Kbytes). (Relevant to Solaris only, removed from 5.0.)


-XX:-DisableExplicitGC Disable calls to System.gc(); the JVM still performs garbage collection when necessary.


-XX:+FailOverToOldVerifier Fail over to old verifier when the new type checker fails. (Introduced in 6.)


-XX:+HandlePromotionFailure The youngest generation collection does not require a guarantee of full promotion of all live objects. (Introduced in 1.4.2 update 11) [5.0 and earlier: false.]


-XX:+MaxFDLimit Bump the number of file descriptors to max. (Relevant  to Solaris only.)


-XX:PreBlockSpin=10 Spin count variable for use with -XX:+UseSpinning. Controls the maximum spin iterations allowed before entering operating system thread synchronization code. (Introduced in 1.4.2.)


-XX:-RelaxAccessControlCheck Relax the access control checks in the verifier. (Introduced in 6.)


-XX:+ScavengeBeforeFullGC Do young generation GC prior to a full GC. (Introduced in 1.4.1.)


-XX:+UseAltSigs Use alternate signals instead of SIGUSR1 and SIGUSR2 for VM internal signals. (Introduced in 1.3.1 update 9, 1.4.1. Relevant to Solaris only.)


-XX:+UseBoundThreads Bind user level threads to kernel threads. (Relevant to Solaris only.)


-XX:-UseConcMarkSweepGC Use concurrent mark-sweep collection for the old generation. (Introduced in 1.4.1)


-XX:+UseGCOverheadLimit Use a policy that limits the proportion of the VM's time that is spent in GC before an OutOfMemory error is thrown. (Introduced in 6.)


-XX:+UseLWPSynchronization Use LWP-based instead of thread based synchronization. (Introduced in 1.4.0. Relevant to Solaris only.)


-XX:-UseParallelGC Use parallel garbage collection for scavenges. (Introduced in 1.4.1)


-XX:-UseParallelOldGC Use parallel garbage collection for the full collections. Enabling this option automatically sets -XX:+UseParallelGC. (Introduced in 5.0 update 6.)


-XX:-UseSerialGC Use serial garbage collection. (Introduced in 5.0.)


-XX:-UseSpinning Enable naive spinning on Java monitor before entering operating system thread synchronization code. (Relevant to 1.4.2 and 5.0 only.) [1.4.2, multi-processor Windows platforms: true]


-XX:+UseTLAB Use thread-local object allocation (Introduced in 1.4.0, known as UseTLE prior to that.) [1.4.2 and earlier, x86 or with -client: false]


-XX:+UseSplitVerifier Use the new type checker with StackMapTable attributes. (Introduced in 5.0.)[5.0: false]


-XX:+UseThreadPriorities Use native thread priorities.


-XX:+UseVMInterruptibleIO Thread interrupt before or with EINTR for I/O operations results in OS_INTRPT. (Introduced in 6. Relevant to Solaris only.)
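For example, a sketch of selecting a collector using the behavioral options above (pick one line; MyApp is a placeholder main class):

java -XX:+UseConcMarkSweepGC MyApp
java -XX:+UseParallelGC -XX:+UseParallelOldGC MyApp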





--------------------------------------------------------------------------------

Performance Options

Option and Default Value / Description
-XX:+AggressiveOpts Turn on point performance compiler optimizations that are expected to be default in upcoming releases. (Introduced in 5.0 update 6.)


-XX:CompileThreshold=10000 Number of method invocations/branches before compiling [-client: 1,500]


-XX:LargePageSizeInBytes=4m Sets the large page size used for the Java heap. (Introduced in 1.4.0 update 1.) [amd64: 2m.]


-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.


-XX:MaxNewSize=size Maximum size of new generation (in bytes). Since 1.4, MaxNewSize is computed as a function of NewRatio. [1.3.1 Sparc: 32m; 1.3.1 x86: 2.5m.]


-XX:MaxPermSize=64m Size of the Permanent Generation.  [5.0 and newer: 64 bit VMs are scaled 30% larger; 1.4 amd64: 96m; 1.3.1 -client: 32m.]


-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.


-XX:NewRatio=2 Ratio of new/old generation sizes. [Sparc -client: 4 (1.3), 8 (1.3.1+); x86 -server: 8; x86 -client: 12.]


-XX:NewSize=2.125m Default size of new generation (in bytes) [5.0 and newer: 64 bit VMs are scaled 30% larger; x86: 1m; x86, 5.0 and older: 640k]


-XX:ReservedCodeCacheSize=32m Reserved code cache size (in bytes) - maximum code cache size. [Solaris 64-bit, amd64, and -server x86: 48m; in 1.5.0_06 and earlier, Solaris 64-bit and amd64: 1024m.]


-XX:SurvivorRatio=8 Ratio of eden/survivor space size [Solaris amd64: 6; Sparc in 1.3.1: 25; other Solaris platforms in 5.0 and earlier: 32]


-XX:TargetSurvivorRatio=50 Desired percentage of survivor space used after scavenge.


-XX:ThreadStackSize=512 Thread Stack Size (in Kbytes). (0 means use default stack size) [Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.]


-XX:+UseBiasedLocking Enable biased locking. For more details, see this tuning example. (Introduced in 5.0 update 6.) [5.0: false]


-XX:+UseFastAccessorMethods Use optimized versions of Get<Primitive>Field.


-XX:-UseISM Use Intimate Shared Memory. [Not accepted for non-Solaris platforms.] For details, see Intimate Shared Memory.


-XX:+UseLargePages Use large page memory. (Introduced in 5.0 update 5.) For details, see Java Support for Large Memory Pages.


-XX:+UseMPSS Use Multiple Page Size Support w/4mb pages for the heap. Do not use with ISM as this replaces the need for ISM. (Introduced in 1.4.0 update 1, Relevant to Solaris 9 and newer.) [1.4.1 and earlier: false]
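For instance, a sketch combining several of the performance options above (values are illustrative, not recommendations; MyApp is a placeholder main class):

java -server -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:MaxPermSize=128m -XX:+UseLargePages MyApp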





--------------------------------------------------------------------------------

Debugging Options

Option and Default Value / Description
-XX:-CITime Prints time spent in JIT Compiler. (Introduced in 1.4.0.)


-XX:ErrorFile=./hs_err_pid<pid>.log If an error occurs, save the error data to this file. (Introduced in 6.)


-XX:-ExtendedDTraceProbes Enable performance-impacting dtrace probes. (Introduced in 6. Relevant to Solaris only.)


-XX:HeapDumpPath=./java_pid<pid>.hprof Path to directory or filename for heap dump. Manageable. (Introduced in 1.4.2 update 12, 5.0 update 7.)


-XX:-HeapDumpOnOutOfMemoryError Dump heap to file when java.lang.OutOfMemoryError is thrown. Manageable. (Introduced in 1.4.2 update 12, 5.0 update 7.)


-XX:OnError="<cmd args>;<cmd args>" Run user-defined commands on fatal error. (Introduced in 1.4.2 update 9.)


-XX:OnOutOfMemoryError="<cmd args>;
<cmd args>" Run user-defined commands when an OutOfMemoryError is first thrown. (Introduced in 1.4.2 update 12, 6)


-XX:-PrintClassHistogram Print a histogram of class instances on Ctrl-Break. Manageable. (Introduced in 1.4.2.) The jmap -histo command provides equivalent functionality.


-XX:-PrintConcurrentLocks Print java.util.concurrent locks in Ctrl-Break thread dump. Manageable. (Introduced in 6.) The jstack -l command provides equivalent functionality.


-XX:-PrintCommandLineFlags Print flags that appeared on the command line. (Introduced in 5.0.)


-XX:-PrintCompilation Print message when a method is compiled.


-XX:-PrintGC Print messages at garbage collection. Manageable.


-XX:-PrintGCDetails Print more details at garbage collection. Manageable. (Introduced in 1.4.0.)


-XX:-PrintGCTimeStamps Print timestamps at garbage collection. Manageable (Introduced in 1.4.0.)


-XX:-PrintTenuringDistribution Print tenuring age information.


-XX:-TraceClassLoading Trace loading of classes.


-XX:-TraceClassLoadingPreorder Trace all classes loaded in order referenced (not loaded). (Introduced in 1.4.2.)


-XX:-TraceClassResolution Trace constant pool resolutions. (Introduced in 1.4.2.)


-XX:-TraceClassUnloading Trace unloading of classes.


-XX:-TraceLoaderConstraints Trace recording of loader constraints. (Introduced in 6.)
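For example, a sketch of a diagnostics-oriented run using flags from the table above (the dump path and MyApp are placeholders):

java -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp MyApp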



